Originally posted by @Jon Q.:
Originally posted by @Charles Worth:
@Jon Q. Here too, they are opening the micro units and a lot of stuff related to various new uses for space, but those are based on an intersection of various trends.
I don't disagree with the sentiment expressed, just the timing, predictable effects, and outcomes. I just think there will be a lot of changes that make us look back and wonder how we didn't see them coming and how we ever lived differently (like the smartphone today, for instance), but they won't be that predictable. The effect will certainly be big over time, but not immediate and not as drastic as was implied.
Just because change came slowly with past trends, it doesn't follow that these changes will. I don't know if I necessarily agree. Moore's law, and thus technological innovation, will continue to grow exponentially. That means you will likely see things begin to change very rapidly.
There has been more technological change in the last 10 years than in the previous 100 years.
Plan for the worst. Expect the best.
Moore's Law is only applicable to certain types of technological problems. If everything could just be boiled down to calculations per second, cancer would already be cured.
At a certain level, AI is a people problem. As I stated previously, AI is not easy. Even when it becomes more advanced and perhaps even more mainstream, there will be somewhat of an upper limit on the number of people working on problems. AI isn't a magic bullet that figures out how to do everything itself. It is a method of computing that allows a system to learn either from massive amounts of data or via trial and error.
But ultimately, someone - a human writing the software - needs to understand what problem they need the software to solve and how to make the computer learn the best way to solve it.
Moore's Law only becomes relevant after those criteria have been met: the developer has written the code, and the computer has to churn through many computations to arrive at a solution.
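To make that division of labor concrete, here's a deliberately tiny sketch (my own illustration, not anyone's actual system): the human decides what the problem is, what the model looks like, and what counts as a good answer. The only part that faster hardware speeds up is the number-crunching loop at the end.

```python
# Hypothetical sketch: the human decides WHAT to learn; the machine only
# supplies the raw number-crunching that Moore's Law accelerates.
import random

# Human decision #1: the problem -- predict y from x with a straight line.
data = [(x, 3.0 * x + 1.0 + random.uniform(-0.5, 0.5)) for x in range(20)]

# Human decision #2: the model (w*x + b) and the loss (mean squared error).
w, b = 0.0, 0.0
lr = 0.001

# Only this loop benefits from faster hardware.
for _ in range(10_000):
    grad_w = sum(2 * ((w * x + b) - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * ((w * x + b) - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}")  # roughly 3 and 1
```

If nobody writes the part above the loop, no amount of compute gets you an answer.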
Right now, the bottlenecks are:
1. Not understanding the problem correctly.
2. Not understanding how to make the computer solve the problem autonomously.
3. Not enough smart people to solve #1 and #2.
For instance, it took an entire team of AI students about 10 years to create a poker AI that could beat a world-class human poker player heads-up.
In the commercial world, this exercise would have been entirely unprofitable. First off, the potential profit would be greatly eclipsed by the cost of hiring a team of some of the smartest AI people in the world for 10 years. Second, it would be far easier and more cost-effective to teach a person, over the course of a year, to beat a world-class poker player.
And that was just to beat a poker player playing them one on one at a very specific game of poker. Add a second, third, or ninth player to the mix and they have to go back and use an entirely different method to solve the problem, because certain strategies work well in one environment but will be entirely wrong in a different one.
Obviously, they will be able to solve these new problems more quickly, but not because of computational power: it's because they know what path to take to solving the problem.
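For a flavor of the trial-and-error style of learning involved, here's a toy sketch (mine, not anything from the actual poker research): regret matching via self-play on rock-paper-scissors. As far as I know, refinements of this idea (counterfactual regret minimization) are what the well-known heads-up poker bots were built on. Again, notice that a human had to pick the method and define the game; the computer just grinds out the iterations.

```python
# Toy example: regret matching via self-play on rock-paper-scissors.
import random

ACTIONS = ["rock", "paper", "scissors"]
# PAYOFF[a][b]: payoff to the player who picked a against a player who picked b
PAYOFF = [[0, -1, 1],
          [1, 0, -1],
          [-1, 1, 0]]

def current_strategy(regrets):
    # Play each action in proportion to its positive regret
    # ("how much better I wish I had done with that action").
    pos = [max(r, 0.0) for r in regrets]
    total = sum(pos)
    return [p / total for p in pos] if total > 0 else [1.0 / 3] * 3

regrets = [[0.0] * 3, [0.0] * 3]
strat_sums = [[0.0] * 3, [0.0] * 3]

for _ in range(100_000):
    strats = [current_strategy(regrets[p]) for p in range(2)]
    moves = [random.choices(range(3), weights=strats[p])[0] for p in range(2)]
    for p in range(2):
        my, opp = moves[p], moves[1 - p]
        for a in range(3):
            # Regret: payoff I'd have gotten with action a, minus what I actually got.
            regrets[p][a] += PAYOFF[a][opp] - PAYOFF[my][opp]
            strat_sums[p][a] += strats[p][a]

avg = [s / sum(strat_sums[0]) for s in strat_sums[0]]
print([f"{ACTIONS[a]}: {avg[a]:.2f}" for a in range(3)])  # drifts toward 1/3 each
```

This works neatly for a tiny two-player zero-sum game; scale it up to full poker, or add more players, and the method itself has to change, which is exactly the point.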
So the chance for exponential growth in AI is tied less to computational power and more to figuring out an efficient method of teaching computers to learn that could be applied across multiple problems.
As @Charles Worth noted, that knowledge is likely to advance at an incremental pace rather than an exponential one. Which is why, even though people have been talking about AI for 30+ years, and computing power is astronomically greater than it was 30+ years ago, we've only seen incremental AI advancements. Only if there is a breakthrough in AI theory itself will growth become exponential.