Maybe the socialists are on to something after all? I just finished reading a couple of takedowns on the prospects for technology to free us from carbon-based fuels -- "100 Percent Wishful Thinking: The Green-Energy Cornucopia" -- and for technology to deliver food without using much land -- "An Engineer, an Economist, and an Ecomodernist Walk Into a Bar and Order a Free Lunch." Perhaps our environmental problems won't be solved as a side effect of making money? Perhaps capitalism, while having been very successful for the last several hundred years, has a limited shelf life?
Both of the above-referenced articles are by Stan Cox:
Background and Personal History: Stan was a wheat geneticist in the US Department of Agriculture for 13 years before joining The Land Institute in Salina, Kansas, as a senior scientist in 2000. When not working as a plant breeder in the field and greenhouse, he has written three books: Sick Planet: Corporate Food and Medicine (Pluto Press, 2008); Losing Our Cool: Uncomfortable Truths About Our Air-Conditioned World (And Finding New Ways to Get Through the Summer) (The New Press, 2010); and Any Way You Slice It: The Past, Present, and Future of Rationing (The New Press, 2013). Since 2003, he has regularly written investigative pieces, op-eds, and other articles for a wide range of Internet and print publications. His articles have appeared in a wide range of newspapers in 43 states and several countries, including the New York Times, Washington Post, Los Angeles Times, and the Guardian.

We may be approaching times when more forceful government intervention is required to manage crises. This is what happened, successfully, during the Great Depression and World War II. Since then, technology has developed enormously and is putting unprecedented pressure on the environment. We need new laws to deal with this situation, and I believe they must go beyond the capitalist incentives that we've been using unsuccessfully.
It's clear with health care financing, for example, that the U.S. model of capitalism doesn't work. It has resulted in too much complexity and financialization, as comparison with peer nations shows. The same might be said for the media business, which seems dysfunctional and greatly in need of a more social model, with checks and balances beyond those currently employed in our capitalist system.
Energy seems to be another area where our current model of capitalism is failing, as I discussed a couple of posts back. Some form of greater government regulation of the legal and financial system is necessary. Whether or not we should call this socialism seems to be the question of the day, but perhaps it's not the right question.
We want to protect the earth before its life-sustaining resources are depleted or damaged beyond repair. To do this, capitalism needs to be restrained somehow. We can do this democratically if we put our minds to it.
ADDENDUM #1: From https://en.wikipedia.org/wiki/AI_winter:
In the history of artificial intelligence, an AI winter is a period of reduced funding and interest in artificial intelligence research. The field has experienced several hype cycles, followed by disappointment and criticism, followed by funding cuts, followed by renewed interest years or decades later.
The term first appeared in 1984 as the topic of a public debate at the annual meeting of AAAI (then called the "American Association of Artificial Intelligence"). It is a chain reaction that begins with pessimism in the AI community, followed by pessimism in the press, followed by a severe cutback in funding, followed by the end of serious research.[2] At the meeting, Roger Schank and Marvin Minsky—two leading AI researchers who had survived the "winter" of the 1970s—warned the business community that enthusiasm for AI had spiraled out of control in the 1980s and that disappointment would certainly follow. Three years later, the billion-dollar AI industry began to collapse.
Hype is common in many emerging technologies, such as the railway mania or the dot-com bubble. The AI winter is primarily a collapse in the perception of AI by government bureaucrats and venture capitalists...
There were two major winters in 1974–1980 and 1987–1993 and several smaller episodes, including the following:
- 1966: failure of machine translation
- 1970: abandonment of connectionism
- 1971–75: DARPA's frustration with the Speech Understanding Research program at Carnegie Mellon University
- 1973: large decrease in AI research in the United Kingdom in response to the Lighthill report
- 1973–74: DARPA's cutbacks to academic AI research in general
- 1987: collapse of the Lisp machine market
- 1988: cancellation of new spending on AI by the Strategic Computing Initiative
- 1993: expert systems slowly reaching the bottom
- 1990s: quiet disappearance of the fifth-generation computer project's original goals
The fizzle of the fifth generation:

In 1981, the Japanese Ministry of International Trade and Industry set aside $850 million for the Fifth Generation computer project. Their objectives were to write programs and build machines that could carry on conversations, translate languages, interpret pictures, and reason like human beings. By 1991, the impressive list of goals penned in 1981 had not been met. Indeed, some of them had not been met in 2001, or 2011. As with other AI projects, expectations had run much higher than what was actually possible.

ADDENDUM #2: From https://blog.piekniewski.info/2018/05/28/ai-winter-is-well-on-its-way/:
Deep learning has been at the forefront of the so-called AI revolution for quite a few years now, and many people believed that it is the silver bullet that will take us to the world of wonders of technological singularity (general AI). Many bets were made in 2014, 2015, and 2016, when new boundaries were still being pushed, such as AlphaGo etc. Companies such as Tesla were announcing through the mouths of their CEOs that a fully self-driving car was very close, to the point that Tesla even started selling that option to customers [to be enabled by future software update].
We have now mid 2018 and things have changed. Not on the surface yet, NIPS conference is still oversold, the corporate PR still has AI all over its press releases, Elon Musk still keeps promising self driving cars and Google CEO keeps repeating Andrew Ng's slogan that AI is bigger than electricity. But this narrative begins to crack. And as I predicted in my older post, the place where the cracks are most visible is autonomous driving - an actual application of the technology in the real world.

The older post referenced above is from 2016. Excerpt:
My bet is that the self-driving car will demolish the current AI hype. And I'm not talking about assisted driving but full (level 5) autonomy, as only this makes the case for the gigantic investments made by numerous companies. Now don't get me wrong: I'd love to have one; my entire work is devoted to solving the fundamental problems that would allow for one. But at the same time, I'm astonished to see so many other people working in the field of AI, enclosed in their model domains, not seeing the problem!

The key observation is this: a self-driving car is a robotic device operating in an unrestricted environment. We cannot possibly assume that roadways are restricted domains, since in reality literally anything can happen in the middle of the road. There are several other problems which I have previously discussed, but the fundamental one is that we keep building AI as statistical pattern matchers. Such AI can fundamentally deal only with the stuff it has seen before; it cannot anticipate or identify outliers (new, unknown things) and react appropriately.
Now that being said, I think the time is right to actually solve the appropriate problems, and I've put forward a broad proposal on how to approach AI differently - in summary, learn the stuff that is constant - physics - rather than try to memorise all the corner cases. The problem is, once there is an AI winter, everyone doing it will get equally busted, even the whistleblowers like me.
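Piekniewski's "statistical pattern matcher" point is easy to demonstrate concretely. Here's a minimal sketch of my own (the data and model are invented for illustration, not taken from his post): a classifier trained on one region of input space still returns a confident answer for an input unlike anything it has seen, with no built-in signal that it is extrapolating.

```python
# Toy illustration (my example, not from the quoted post): a statistical
# pattern matcher has no native notion of "I've never seen anything like this."
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Training data: two well-separated 2-D clusters near the origin.
X = np.vstack([rng.normal(loc=-2.0, scale=0.5, size=(100, 2)),
               rng.normal(loc=+2.0, scale=0.5, size=(100, 2))])
y = np.array([0] * 100 + [1] * 100)

clf = LogisticRegression().fit(X, y)

# An outlier far outside anything in the training set -- the "literally
# anything can happen in the middle of the road" case.
outlier = np.array([[80.0, -95.0]])
print(clf.predict_proba(outlier))
# Typically prints near-total confidence in one class. The model cannot
# flag the input as novel; it simply pattern-matches to the nearest
# decision region, which is exactly the failure mode described above.
```

One can bolt an out-of-distribution detector on top, but arguably the detector is itself a pattern matcher trained on what "normal" looked like, so the same problem recurs one level up.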
ADDENDUM #3: I'm spending all day on Piekniewski's blog😀. From https://blog.piekniewski.info/2016/08/09/intelligence-is-real/:

The inherent property of the AI booms is the enormous enthusiasm they create, particularly among people who have no idea how these systems work and what their limitations are (like venture capitalists or government officials, for example). The visions are typically very romantic: automatic translation of millions of phone calls, visual perception, cheap and capable robots, natural language communication with computers, and more recently self-driving cars (which are a form of autonomous robots). Who would not like to have these wonders? Notably, there is a clear incentive to create hype: researchers need to get the research money. The best way to do it is to scare somebody in the government that SkyNet is about to be born (in some other country), therefore AI research needs the dime. Entrepreneurs need to convince VCs, so they use a similar strategy. All that is quickly picked up by journalists, since the public loves stories about killer AI and the Terminator. Eventually everybody starts jumping on the AI bandwagon.
So here is what we've got: a field with a sexy name that no one really understands, which promises wonders beyond imagination. What could possibly go wrong?
ADDENDUM #4: Good comment here, from Clyde Schechter at Kevin Drum's blog:

Well, yes, as Norbert Wiener proposed many decades back, anything that the meat machine can do can be simulated in a non-meat machine. But I think this misses a few subtle points.
1. Except perhaps for the challenge of doing it, I don't think anybody actually wants to build a full AI simulation of a human brain. It wouldn't be any more useful than a human brain, and we already have plenty of those lying around underutilized.
2. Perhaps we can succeed in building an AI that does a really good simulation of empathy (or, if you prefer, actually feels empathy--it doesn't matter for present purposes). In fact, I'm sure we can. But what else will it do? The only model we have of empathy-capable intelligence is the human brain (OK, maybe some other animal brains, too--it doesn't matter for this point). And that human brain also exhibits anger, churlishness, boredom, fatigue, spitefulness, and a whole host of other things that we probably don't want our AI companion to emulate. But nobody has yet proved that it is possible, even in principle, to build an AI that exhibits empathy without exhibiting those other things. Maybe no such algorithm is, in principle, possible--just as no algorithm can solve the halting problem [a sketch of that classic argument follows below]. None of nature's versions of intelligence have empathy without also having the negative emotions. So until somebody actually constructs one, or until we have a detailed enough algorithmic understanding of empathy that we can prove theorems about it, we don't know if these things can ever be separated. If they cannot, then perhaps we will not want our AI companions after all.
3. A perfect simulation of the human brain would be problematic in another way. It is clearly part of our neurologic wiring that we recognize we have a body and that it provides us with sensory input. We know that sensory deprivation can lead to psychosis. Would a disembodied AI that perfectly simulates the human brain just quickly go psychotic? I think there's a good chance of that.
In short, nobody really wants an AI that actually simulates the human brain. We want AI that selectively emulates certain aspects of human brain function and omits others. Whether that is even possible in principle remains unknown today.

My follow-on thought is that we should do cost-benefit analyses, from the societal perspective, of investments in artificial intelligence such as autonomous vehicles. Is the ability to take a nap worth ceding autonomy?
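For readers who haven't seen it, the halting-problem argument Schechter leans on in his point 2 goes like this. A minimal sketch of the classic diagonalization (the `halts` function here is hypothetical; the whole point is that it cannot actually be written):

```python
# Sketch of Turing's diagonal argument. `halts` is a hypothetical oracle;
# the argument shows no general implementation of it can exist.
def halts(program, arg):
    """Hypothetical: return True iff program(arg) eventually halts."""
    raise NotImplementedError("no general implementation can exist")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about running
    # `program` on itself.
    if halts(program, program):
        while True:      # oracle says "halts" -> loop forever
            pass
    else:
        return "done"    # oracle says "loops" -> halt immediately

# Does paradox(paradox) halt?
#   If halts(paradox, paradox) returns True, then paradox(paradox) loops forever.
#   If it returns False, then paradox(paradox) halts.
# Either answer contradicts the oracle, so no always-correct `halts` exists.
```

The analogy is only suggestive, of course: nobody has shown that "empathy without spite" is impossible in this formal sense, which is exactly Schechter's point that we lack theorems either way.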