The trend is to think of AI as advancing when it performs certain narrow tasks better than humans. This idea is as old as computing itself: the race between man and machine. The concept dates back to the Turing test: whether, and when, we will be unable to tell tasks performed by a computer from tasks performed by a human. Today, it is found in current attempts to use deep learning, transformers, and large-scale modeling to simulate how human intelligence works and eventually best it with practice.
It is also very limited, at least relative to what we know about what it means to be human. It doesn’t have to be.
The productivity paradox holds that evidence of AI’s impact on human productivity and job creation has been scarce at best. Will slightly better tools solve this? Perhaps. But what if limited productivity impacts are tied to core limitations in how we think about AI advances in the first place? We suggest that this has significant implications for how we design AI, if AGI (artificial general intelligence) is to be achieved.
Computer scientists have always known that AI progress should be measured by how it makes us better, not how quickly it replaces us, but some of that ethos has been lost over time. Economists and social scientists know that human intelligence does not exist in a vacuum; it is informed by communities: firms, markets, cultures, and educational systems, to name a few. Modeling intelligence on an analogy of the human in isolation is therefore useful, but it builds a selection bias into the story. AI can augment the human ability to participate in communities by providing insights. AI can even create new kinds of communities, with implications for productivity and human welfare. However, this requires a truly human-centered AI approach, which is what Machine Learning X Doing provides.
Part of this issue, one might argue, stems simply from the lack of core technology companies started by social scientists; at the end of the day, this is not really a practical critique per se. The other deficit, which is more pragmatic, is the correspondingly low awareness of how human organizations, from technology companies to governments and even individual users, actually function from a technical standpoint. Many technologists are tempted to think of computing as “technical” expertise and of organizations and human outcomes as “subject matter” expertise. It is more accurate to think of all fields as being technical in their own ways, and of computing as being both technical and a subject matter. Abstraction and technical expertise must be driven by the human condition they ultimately serve if they are to remain relevant.
This is why Machine Learning X Doing sits at the interface of frontier economics and frontier computing. We believe that until these and related elements are rigorously integrated into how AI is built from a technical standpoint, the impact of AI on human productivity and job creation, as well as artificial general intelligence (AGI), will remain elusive. Deep learning and transformers remain important, but the question is what next-level AI can build on these innovations to get closer to real productivity impacts and to AGI more broadly.
The solution is here. Machine Learning X Doing. Next-level AI.