I suppose if you could piggy-back an algorithm onto a widely circulated program, then it would be much easier to set up and predict. However, we are still far from having any such infrastructure.
As for the future, keep in mind that we are slowly moving toward cloud computing, which means that the real computation power will be in server farms rather than in individual homes.
In essence, distributed computing is a way of taking advantage of computing power nobody is using. As computing resources are used more efficiently, there will be less idle power for distributed computing to work with.
I am sure there are also other advantages to a single supercomputer versus large numbers of normal computers. Data transfer speed, for example, could be a major bottleneck in complex calculations spread across many machines.
Moreover, I think supercomputers also serve as a way to pioneer advances in computing. Much of the improvement in computation power that we get in normal computers comes from what was learned by building supercomputers.
That's the beauty of open source: you can piggy-back on it.
About cloud computing, I think only the data are stored on the server while the computing power stays at home. Besides, cloud computing means that internet bandwidth will have to increase significantly to support the connection between servers and homes, and that alone will make distributed computing more useful.
By the way, I already tried one of these programs. Here is what it did:
1. it downloaded data from the server
2. it ran the calculation (I started yesterday)
3. it uploaded the processed data back to the server (that just finished now)
So maybe not every test is suitable for distributed normal computers, but why should every test be run on a central supercomputer when you can delegate the simpler simulations?
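Just to make that cycle concrete, here is a minimal sketch of the download/compute/upload loop in Python. The server URL, the endpoint paths, and the "calculation" are all made up for illustration; a real client would do far more bookkeeping (scheduling, checksums, retries).

import urllib.request

SERVER = "http://example-project.org"  # hypothetical project server, not a real URL

def fetch_work_unit():
    # 1. download a chunk of input data from the server
    with urllib.request.urlopen(SERVER + "/work_unit") as resp:
        return resp.read()

def compute(data):
    # 2. run the calculation locally (placeholder "processing" of the bytes)
    return bytes(b ^ 0xFF for b in data)

def upload_result(result):
    # 3. send the processed data back to the server
    req = urllib.request.Request(SERVER + "/result", data=result, method="POST")
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    work = fetch_work_unit()
    result = compute(work)
    print("upload status:", upload_result(result))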
By the way: today's normal computer is a supercomputer compared to those of decades ago.