A “Macroneconomic” Revolution?

Anatole Kaletsky



LONDON – Next month will mark the tenth anniversary of the global financial crisis, which began on August 9, 2007, when BNP Paribas announced that the value of several of its funds, containing what were supposedly the safest possible US mortgage bonds, had evaporated. From that fateful day, the advanced capitalist world has experienced its longest period of economic stagnation since the decade that began with the 1929 Wall Street crash and ended with the outbreak of World War II.
 
A few weeks ago, at the Rencontres Économiques conference in Aix-en-Provence, I was asked if anything could have been done to avert the “lost decade” of economic underperformance since the crisis. At a session entitled “Have we run out of economic policies?” my co-panelists showed that we have not. They provided many examples of policies that could have improved output growth, employment, financial stability, and income distribution.
 
That allowed me to address the question I find most interesting: Given the abundance of useful ideas, why have so few of the policies that might have ameliorated economic conditions and alleviated public resentment been implemented since the crisis?
 
The first obstacle has been the ideology of market fundamentalism. Since the early 1980s, politics has been dominated by the dogma that markets are always right and government economic intervention is almost always wrong. This doctrine took hold with the monetarist counter-revolution against Keynesian economics that resulted from the inflationary crises of the 1970s. It inspired the Thatcher-Reagan political revolution, which in turn helped to propel a 25-year economic boom from 1982 onward.
 
But market fundamentalism also inspired dangerous intellectual fallacies: that financial markets are always rational and efficient; that central banks must simply target inflation and not concern themselves with financial stability and unemployment; that the only legitimate role of fiscal policy is to balance budgets, not stabilize economic growth. Even as these fallacies blew up market-fundamentalist economics after 2007, market-fundamentalist politics survived, preventing an adequate policy response to the crisis.
 
That should not be surprising. Market fundamentalism was not just an intellectual fashion.

Powerful political interests motivated the revolution in economic thinking of the 1970s. The supposedly scientific evidence that government economic intervention is almost always counter-productive legitimized an enormous shift in the distribution of wealth, from industrial workers to the owners and managers of financial capital, and of power, from organized labor to business interests. The Polish economist Michal Kalecki, a co-inventor of Keynesian economics (and a distant relative of mine), predicted this politically motivated ideological reversal with uncanny accuracy back in 1943:
 
“The assumption that a government will maintain full employment in a capitalist economy if it knows how to do it is fallacious. Under a regime of permanent full employment, ‘the sack’ would cease to play its role as a disciplinary measure, leading to government-induced pre-election booms. The workers would get out of hand and the captains of industry would be anxious ‘to teach them a lesson.’ A powerful bloc is likely to be formed between big business and rentier interests, and they would probably find more than one economist to declare that the situation was manifestly unsound.”
 
The economist who declared that government policies to maintain full employment were “manifestly unsound” was Milton Friedman. And the market-fundamentalist revolution that he helped to lead against Keynesian economics lasted for 30 years. But, just as Keynesianism was discredited by the inflationary crises of the 1970s, market fundamentalism succumbed to its own internal contradictions in the deflationary crisis of 2007.
 
A specific contradiction of market fundamentalism suggests another reason for income stagnation and the recent upsurge of populist sentiment. Economists believe that policies that increase national income, such as free trade and deregulation, are always socially beneficial, regardless of how these higher incomes are distributed. This belief is based on a principle called “Pareto optimality,” which assumes that the people who gain higher incomes can always compensate the losers. Therefore, any policy that increases aggregate income must be good for society, because it can make some people richer without leaving anyone worse off.
 
But what if the compensation assumed by economists in theory does not happen in practice?
 
What if market-fundamentalist politics specifically prohibits the income redistribution or regional, industrial, and education subsidies that could compensate those who suffer from free trade and labor-market “flexibility”? In that case, Pareto optimality is not socially optimal at all. Instead, policies that intensify competition, whether in trade, labor markets, or domestic production, may be socially destructive and politically explosive.
 
This highlights yet another reason for the failure of economic policy since 2007. The dominant ideology of government non-intervention naturally intensifies resistance to change among the losers from globalization and technology, and creates overwhelming problems in sequencing economic reforms. To succeed, monetary, fiscal, and structural policies must be implemented together, in a logical and mutually reinforcing order. But if market fundamentalism blocks expansionary macroeconomic policies and prevents redistributive taxation or public spending, populist resistance to trade, labor-market deregulation, and pension reform is bound to intensify. Conversely, if populist opposition makes structural reforms impossible, this encourages conservative resistance to expansionary macroeconomics.
 
Suppose, on the other hand, that the “progressive” economics of full employment and redistribution could be combined with the “conservative” economics of free trade and labor-market liberalization. Both macroeconomic and structural policies would then be easier to justify politically – and much more likely to succeed.
 
Could this be about to happen in Europe? France’s new president, Emmanuel Macron, based his election campaign on a synthesis of “right-wing” labor reforms and a “left-wing” easing of fiscal and monetary conditions – and his ideas are gaining support in Germany and among European Union policymakers. If “Macroneconomics” – the attempt to combine conservative structural policies with progressive macroeconomics – succeeds in replacing the market fundamentalism that failed in 2007, the lost decade of economic stagnation could soon be over – at least for Europe.
 
 


Review & Outlook

The Wolves of Long Island

The SEC does its job in taking down a pump-and-dump stock scheme.

By The Editorial Board
























Government prosecutors love to hunt for big cases on Wall Street, which are great for publicity, but an often-neglected duty is to protect small investors on Main Street. So it’s noteworthy that the Securities and Exchange Commission recently blew up a Long Island-based boiler-room scheme.

Last week the SEC charged 13 individuals with using high-pressure sales tactics to bilk more than 100 small investors—including many senior citizens—out of some $10 million in savings. The U.S. Attorney for the Eastern District of New York followed with criminal charges.

According to the SEC, the alleged swindlers—many of whom had been banned from the brokerage industry—set up a sham financial-services company on Long Island. They used this “boiler room” to buy, trade and inflate the price of penny stocks in microcap companies. Penny stocks trade over the counter at less than $5 a share and are often thinly traded, which makes them easier to manipulate than blue-chip securities on the big exchanges.

The alleged scammers purchased large blocks of shares and then “pumped” them to potential investors with harassing emails and phone calls. One conspirator emailed: “We are not brokers but a financial RESEARCH firm who continues to produce winner after winner in an UP or DOWN market! I need to speak with you about how this program will impact your portfolio in a POSITIVE way. BUT I cant help you if I cant speak with you.” Another said the stocks were “guaranteed winners” and the “buy of a lifetime.”

The boiler boys also used “washes”—e.g., buying 100 shares of a stock at $5 a share with one co-conspirator while selling the same number of shares for the same price to another. This drove up the stock price and created the impression that shares were actively being traded though ownership wasn’t changing hands. Once prices hit a certain level, they dumped the stocks, realizing $14 million in gains while costing victims millions.

When a victim complained about his losses, one con artist allegedly told him: “I am tired of hearing from you. Do you have any rope at home? If so tie a knot and hang yourself or get a gun and blow your head off.”

The SEC was established during the New Deal to protect mom-and-pop investors from fraudsters. But the agency has lately become better known for its headline-making prosecutions of Wall Street barons for white-collar crimes like insider trading, though their alleged victims are often other well-heeled and sophisticated investors. The agency also won plaudits from progressives for rules requiring disclosures on executive pay. But the SEC was asleep when Bernie Madoff and Allen Stanford were bilking investors out of their life savings.

New SEC Chairman Jay Clayton told us last week that he plans to make protecting small investors from scams like the Long Island boiler room a priority, and the agency deserves credit for bringing these cases back to the center of SEC enforcement.


10 Years Gone

by: Eric Parnell, CFA


- Then as it was, then again it will be.

- It was exactly ten years ago today that the U.S. stock market reached its pre-crisis peak.

- While the course may change sometimes, the song remains the same.
 
“Then as it was, then again it will be
And though the course may change sometimes
Rivers always reach the sea”
- Ten Years Gone, Led Zeppelin, 1975
 
 
Then as it was exactly ten years ago today. The date was July 19, 2007, and the U.S. stock market closed at another new all-time high. The world was awash with liquidity, risk asset prices were steadily rising, and investor worries were few despite evidence of accumulating risks beneath the global financial system. What followed over the next 18 months was not only the reemergence of downside risk but the near collapse of the global financial system. Could such a fate possibly befall markets again, ten years on?

The Last Time Rivers Reached The Sea

The skies were once clear and all was supposedly well. Ten years have gone since July 19, 2007.
 
What is so special about this date in particular? It was the date that the U.S. stock market as measured by the S&P 500 Index (SPY) reached its peak before the beginning of the end that soon came to be known as the financial crisis.
 
Sure, the S&P 500 Index (IVV) closed marginally higher a few months later on October 9, 2007, but by then the die had already been cast, volatility was spiking, and the game was over. When reflecting on the financial crisis and the years that have followed since the deluge of central bank liquidity tamed its flames, the date that I use to mark the beginning of the end is July 19, 2007.
 
I still remember to this day the mood across capital markets at the time. The summer was hot and stock investors were brimming with optimism. Corporate earnings were steadily rising on both an operating and GAAP basis at the same time that stock valuations were still fairly reasonable in the 16 to 17 times trailing 12-month earnings range. And while 2007 Q1 GDP had been somewhat soft, it was set to rebound to above a +3% annualized rate in the second quarter of the year.

At the same time, inflationary pressures were largely contained at around 2% despite the fact that commodities prices (DJP) including energy (USO) were on the rise. Moreover, the U.S. Federal Reserve was contemplating an easing of monetary policy in order to address some of the pressures that were accumulating in the economy associated with the housing industry. Strong economic growth, low inflation, rising earnings, reasonable valuations, and a potential shift toward more accommodative monetary policy from the U.S. Federal Reserve in the months ahead.
 
Put simply, what was not to love about investing in risk assets including U.S. stocks (DIA) ten years ago? The subsequent surrender of more than half of total portfolio value over the next twenty months provided the answer.
 
The Course May Change Sometimes
“Blind stars of fortune, each have several rays
On the wings of maybe, down in birds of prey
Kind of makes me feel sometimes, didn't have to grow
But as the eagle leaves the nest, it's got so far to go”
- Ten Years Gone, Led Zeppelin, 1975
 
So where are we today, exactly ten years gone from this previous stock market peak? The S&P 500 Index is once again set to close at a new all-time high on July 19, 2017, at a level nearly 60% higher than it was ten years ago on a price basis alone. For many investors, it has left the understandable feeling that they did not have to grow from this prior near-catastrophic experience. Simply stay the course, endure whatever short-term volatility may occur along the way, no matter how dramatic, and drift on the rising tide that is the U.S. stock market over long periods of time.
 
If only investors across much of the rest of the developed and emerging world had been as fortunate as their U.S. counterparts over the past ten years. It also remains to be seen whether the U.S. stock market eagle has finished its journey or whether it still flies on the wings of maybe after so many years.
 
 
So where do we stand today? The U.S. stock market is soaring just as it was ten years ago.
 
 
 
 
Of course, just because the U.S. stock market peaked exactly ten years ago does not mean it is about to do the same today. A number of factors are completely different today than they were ten years ago.
 
Let’s begin with a few indicators on the positive side.
 
First, whereas volatility had been steadily on the rise from historical lows for many months prior to the July 2007 peak, volatility remains at historic lows today.
 
 
 
This leads to an important point, which is that we will likely need to see a sustained rise in stock price volatility before the S&P 500 Index reaches a final peak. This point was also true leading up to the tech bubble peak back in 2000.
 
Also, whereas the U.S. Treasury yield curve as measured by the 2/10 spread had already inverted and was starting to steepen again leading up to the stock market peak in July 2007, the yield curve is still flattening and has yet to invert today.
 
 
While this suggests the stock market eagle has further to go today, it should be noted that the yield curve was still moving toward flattening and further into inversion at the time when the U.S. stock market first peaked during the tech bubble in March 2000. Nonetheless, the fact that the 2/10 spread is still in the 75 to 100 basis-point range suggests time for further flattening may lie ahead.
 
Of course, some characteristics of today’s market pale in comparison to July 2007.
 
The first relates to valuations. For while stocks were trading at around 16 to 17 times trailing earnings back in July 2007, today they are trading in the 22 to 25 times earnings range. While some seek to explain today’s valuation premium away by citing historically low interest rates, it is worth noting that equity risk premiums based on the 10-Year Treasury yield are still at best comparable. Moreover, short-term interest rates are on the rise thanks to the Fed, and according to many (myself not included) are headed higher across the yield curve.
 
The second relates to monetary policy. Whereas the Fed was headed toward easing monetary conditions in July 2007, it is moving assertively toward tightening monetary conditions today.
 
Of course, overall monetary conditions remain easy thanks to the continued liquidity pumping from the European Central Bank and the Bank of Japan, but this support, too, is expected to measurably subside, not expand, in the near term.

The third relates to the economy. Prior to the onset of the financial crisis, U.S. economic growth as measured by real GDP was simply more robust, in the 2% to 4% range, versus the chronic 1% to 2% growth that we have been experiencing for so many years now. Put more simply, the market in 2007 still had a decent engine, whereas today it has been sputtering on liquidity and fumes for years.
 
Overall, the course is different today than it was ten years ago. But just as it was, then again it will be that fundamentals will eventually matter, as rivers always reach the sea. So too will accumulating systemic risks. This does not appear likely to begin on July 20, 2017, or even over the next couple of months for that matter. But what history from ten years ago reminds us is that it does not take long for the market tides to suddenly turn, for a world awash in liquidity to suddenly turn bone dry, and for the unexpected to turn into the traumatic for those who are not prepared.

Holdin’ On, Ten Years Gone

On a closing note, one thing that has not changed over the past ten years is the blind eyes of central bankers. Read the following and manage your portfolio downside risk accordingly.
"We believe the effect of the troubles in the subprime sector on the broader housing market will be limited and we do not expect significant spillovers from the subprime market to the rest of the economy or to the financial system"
- U.S. Federal Reserve Chair Ben Bernanke, May 17, 2007
 
“Would I say there will never, ever be another financial crisis? ... Probably that would be going too far. But I do think we're much safer, and I hope that it will not be in our lifetimes, and I don't believe it will be”
 
- U.S. Federal Reserve Chair Janet Yellen, June 27, 2017
 
Oh no, you didn’t! For those investors that are sleeping well at night depending on the omniscience of global central bankers, beware.
 
The Bottom Line
 
It has been ten years gone since the last stock market peak and the very beginning of the financial crisis in the U.S. stock markets. While the course has changed dramatically in the ten years since, and it is likely that today’s market still has time to grow, in many respects the song remains the same. Enjoy the good times while they last, for capital market history has shown that liquidity-fueled love inevitably gives way to reality and how things are ultimately meant to be.


The Future of Artificial Intelligence: Why the Hype Has Outrun Reality

Robots that serve dinner, self-driving cars and drone-taxis could be fun and hugely profitable.

But don’t hold your breath. They are likely much further off than the hype suggests.

A panel of experts at the recent 2017 Wharton Global Forum in Hong Kong outlined their views on the future for artificial intelligence (AI), robots, drones, other tech advances and how it all might affect employment in the future. The upshot was to deflate some of the hype, while noting the threats ahead posed to certain jobs.

Their comments came in a panel session titled “Engineering the Future of Business,” with Wharton Dean Geoffrey Garrett moderating and speakers Pascale Fung, a professor of electronic and computer engineering at Hong Kong University of Science and Technology; Vijay Kumar, dean of engineering at the University of Pennsylvania; and Nicolas Aguzin, Asia Pacific chairman and CEO for J.P. Morgan.

Kicking things off, Garrett asked: How big and disruptive is the self-driving car movement?

It turns out that so much of what appears in mainstream media about self-driving cars being just around the corner is very much overstated, said Kumar. Fully autonomous cars are many years away, in his view.

One of Kumar’s key points: Often there are two sides to high-tech advancements. One side gets a lot of media attention — advances in computing power, software and the like. Here, progress is quick — new apps, new companies and new products sprout up daily. However, the other, often-overlooked side deeply affects many projects — those where the virtual world must connect with the physical or mechanical world in new ways, noted Kumar, who is also a professor of mechanical engineering at Penn. Progress in that realm comes more slowly.

At some point, all of that software in autonomous cars meets a hard pavement. In that world, as with other robot applications, progress comes by moving from “data to information to knowledge.” A fundamental problem is that most observers do not realize just how vast an amount of data is needed to operate in the physical world — ever-increasing amounts, or, as Kumar calls it — “exponential” amounts. While it’s understood today that “big data” is important, the amounts required for many physical operations are far larger than “big data” implies. The limitations on acquiring such vast amounts of data severely throttle back the speed of advancement for many kinds of projects, he suggested.

In other words, many optimistic articles about autonomous vehicles overlook the fact that it will take many years to get enough data to make fully self-driving cars work at a large scale — not just a couple of years.

Getting enough data to be 90% accurate “is difficult enough,” noted Kumar. Some object-recognition software today “is 90% accurate, you go to Facebook, there are just so many faces — [but there is] 90% accuracy” in identification. Still, even at 90% “your computer-vision colleagues would tell you ‘that’s dumb’…. But to get from 90% accuracy to 99% accuracy requires a lot more data” — exponentially more data. “And then to get from 99% accuracy to 99.9% accuracy, guess what? That needs even more data.” He compares the exponentially rising data needs to a graph that resembles a hockey stick, with a sudden, sharply rising slope.

The problem when it comes to autonomous vehicles, as other analysts have noted, is that 90% or even 99% accuracy is simply not good enough when human lives are at stake.

Exponentially More Data

“To have exponentially more data to get all of the … cases right, is extremely hard,” Kumar said.

“And that’s why I think self-driving cars, which involve taking actions based on data, are extremely hard [to perfect]…. Yes, it’s a great concept, and yes, we’re making major strides, but … to solve it to the point that we feel absolutely comfortable — it will take a long time.”

So why is one left with the impression from reading mainstream media that self-driving cars are just around the corner?

To explain his view of what is happening in the media, Kumar cited remarks by former Fed chairman Alan Greenspan, who famously said there was “irrational exuberance” in the stock market a few years before the huge tech-stock bubble crashed in the early 2000s. Kumar suggested a similar kind of exaggeration is at work today with self-driving cars. “That’s where the irrational exuberance comes in. It’s a technology that is almost there, but it’s going to take a long time to finally assimilate.”

Garrett pointed out that Tesla head Elon Musk claims all of the technology to allow new cars to drive themselves already exists (though not necessarily without a human aboard to take over in an emergency) and that the main problem is “human acceptance of the technology.”

Kumar said he could not disagree more. “Elon Musk will also tell you that batteries are improving and getting better and better. Actually, it’s the same battery that existed five or 10 years ago.” What is different is that batteries have become smaller and less expensive, “because more of us are buying batteries. But fundamentally it’s the same thing.”

Progress has been slow elsewhere, too. In the “physical domain,” Kumar explained, not much has changed when it comes to energy and power, either. “You look at electric motors, it’s World War II technology. So, on the physical side we are not making the same progress we are on the information side. And guess what? In the U.S., 2% of all of electricity consumption is through data centers. If you really want that much more data, if you want to confront the hockey stick, you are going to burn a lot of power just getting the data centers to work. I think at some point it gets harder and harder and harder….”


Similar constraints apply to drone technology he said. “Here’s a simple fact. To fly a drone requires about 200 watts per kilo. So, if you want to lift a 75-kilo individual into the air, that’s a lot of power. Where are you going to get the batteries to do that?” The only power source with enough “power density” to lift such heavy payloads is fossil fuels. “You could get small jet turbines to power drones. But to have electric power and motors and batteries to power drones that can lift people in the air — I think this is a pipe dream.”

That is not to say one “can’t do interesting things with drones, but whatever you do — you have to think of payloads that are commensurate [with] what you want to do.”

In other areas, like electric cars, progress is moving along smartly and Kumar says there is lots of potential. “The Chinese have shown that, they are leading the world. The number of electric cars in China on an annual basis that are being produced is three times that of the U.S…. I do think electric cars are here to stay, but I’m not so sure about drones using electric power.”

Picking up on Kumar’s theme, Fung, who also helps run the Human Language Technology Center at her university, outlined some of the limits of AI in the foreseeable future, where again the hype often outruns reality. While AI may perform many impressive and valuable tasks, physical limitations once again impose hard constraints.

A deep-learning algorithm that can do just speech recognition (that is, transcribing what you are saying) has to be trained on millions of hours of data and “uses huge data farms,” Fung noted.

And while a deep-learning network might have hundreds of thousands of neurons, the human brain has trillions. Humans, for the time being, are much more energy-efficient. They can work “all day on a tiny slice of pizza,” she joked.

The Human Brain Conundrum

This led the panelists to note a second underappreciated divide: the scope of projects that AI can currently master. Kumar pointed out that tasks like translation are relatively narrow. We have “figured out how to go from data to information to some extent, though … with deep learning it’s very hard even to do that. To go from information to knowledge? We have no clue. We don’t know how the human brain works…. It’s going to be a long time before we build machines with the kind of intelligence we associate with humans.”

Not long ago, Kumar noted, IBM’s supercomputer Watson could not even play tic-tac-toe with a five-year-old. Now it beats humans at Jeopardy!. But that speedy progress can blind us to the fact that computers today can best handle only narrow tasks, or “point solutions.” “When you look at generalizing across the many things that humans do — that’s very hard to do.”

Still, the stage is being set for bigger things down the road. To date, automating even those narrow tasks has required humans to “learn how to communicate with machines,” and not always successfully, as frustration with call centers and with Apple’s Siri suggests, noted Fung.

Today, the effort is to reverse the teacher and pupil relationship so that, instead, machines begin to learn to communicate with humans. The “research and development, and application of AI algorithms and machines that will work for us,” cater to us, is underway, Fung said.

“They will understand our meaning, our emotion, our personality, our affect and all that.” The goal is for AI to account for the “different layers” of human-to-human communication.

“We look at each other, we engage each other’s emotion and intent,” said Fung, who is among the leaders worldwide in efforts to make machines communicate better with humans. “We use body language. It’s not just words. That’s why we prefer face-to-face meetings, and we prefer even Skype to just talking on the phone.”

Fung referenced an article she wrote for Scientific American, about the need to teach robots to understand and mimic human emotion. “Basically, it is making machines that understand our feelings and intent, more than just what we say, and respond to us in a more human way.”

Such “affective computing” means machines will ultimately show “affect recognition” picked up from our voices, texts, facial expressions and body language. Future “human-robot communication must have that layer of communication.” But capturing intent as well as emotion is an extremely difficult challenge, Fung added. “Natural language is very hard to understand by machines — and by humans. We often misunderstand each other.”

So where might all this lead when it comes to the future of jobs?

Machines Are Still ‘Dumb’

“In the near future, no one needs to worry because machines are pretty dumb….” Kumar said. As an example, Fung explained that she could make a robot today capable of doing some simple household chores, but, “it’s still cheaper for me to do it, or to teach my kids or my husband to do it. So, for the near future there are tons of jobs where it would be too expensive to replace them with machines. Fifty to 100 years from now, that’s likely to change, just as today’s world is different from 50 years ago.”

But even as new tech arrives it is not always clear what the effect will be ultimately. For example, after the banking industry first introduced automatic teller machines [ATMs], instead of having fewer tellers “we had more tellers,” noted Aguzin. ATMs made it “cheaper to have a branch, and then we had more branches, and therefore we had more tellers in the end.”

On the other hand, introducing blockchain technology as a ledger system into banking will likely eliminate the need for a third party to double-check the accounting. Anything requiring reconciliation can be done instantly, with no need for confirmation, Aguzin added. Eventually the cost of doing a transaction will be “like sending an email, it will be like zero … without any possibility of confusion, there’s no cost. Imagine if you apply that to trade finance, etc.”

Already, Aguzin’s bank is about to automate 1.7 million processes this year that are currently done manually. “And those are not the lowest-level, manual types of jobs — it’s somewhere in the middle.” In an early foray into affective computing, his bank is working on software that will be able to sense what a client is feeling and their purpose when they call in for service. “It’s not perfect yet, but you can get a pretty good sense of how they are feeling, whether they want to complain or are they just going to check a balance? Are they going to do x, y — so you save a lot of time.”

Still, he said he remains confident that new jobs will be created in the wake of new technologies, as was the case following ATMs. His view about the future of jobs and automation is not as “catastrophic” as some analysts’. “I am a bit concerned about the speed of change, which may cause us to be careful, but … there will be new things coming out. I tend to have a bit more positive view of the future.”

Fung reminded the audience that even in fintech, progress will be throttled by the available data.

“In certain areas, you have a lot of data; in others you don’t.” Financial executives have told Fung that they have huge databases, but in her experience, those databases often are not nearly large enough to accomplish many of their goals.

Kumar concedes that today we are creating more jobs for robots than for humans, a cause for concern for the future of human employment. But he also calls himself a “pathological optimist” on the jobs issue. AI and robotics will work best in “applications where they work with humans.” Echoing Fung, he added that “it’s going to take a long time before we build machines with the kind of intelligence associated with humans.” When it comes to going from “information to knowledge, we have no clue. We don’t know how the human brain works.”

Security at the Top — and Bottom

Picking up on Fung’s point that many lower-skill jobs likely will be preserved, Kumar added that the jobs most likely to be eliminated could surprise people. “What is the one thing that computers are really good at? They are good at taking exams. So, this expectation of, oh, I got a 4.0 from this very well-known university, I will have a job in the future — this is not true.” At the same time, for robots “cleaning up a room after your three-year-old is just very, very hard. Serving dinner is very, very hard. Cleaning up after dinner is even harder. I think those jobs are secure.”

The panel’s consensus: The jobs safest from robot replacement will be those at the top and the bottom, not those in the middle.

What about many years down the road, when robots become advanced enough and cheap enough to take over more and more human activities? What’s to become of human work?

For one thing, Fung said, there will be a lot more AI engineers “and people who have to regulate machines, maintain machines, and somehow design them until the machines can reproduce themselves.”

But also, many jobs will begin to adapt to the new world. Suppose, for example, at some point in the distant future many restaurants have robot servers and waiters. People will “pay a lot more money to go to a restaurant where the chef is a human and the waiter is a human,” Fung said. “So human labor would then become very valuable.”

She added that many people might “become artists and chefs, and performing artists, because you still want to listen to a concert performed by humans, don’t you, rather than just robots playing a concerto for you. And you will still want to read a novel written by a human even though it’s no different from a novel written by a machine someday. You still appreciate that human touch.”

What’s more, creativity already is becoming increasingly important, Fung notes. So, it’s not whether AI engineers or business people will be calling the shots in the future. “It’s really creative people versus non-creative people. There is more and more demand for creative people.” Already, it appears more difficult for engineering students “to compete with the best compared to the old days.”

In the past, for engineers, a good academic record guaranteed a good job. Today, tech companies interview applicants in “so many different areas,” Fung added. They look beyond technical skills.

They look for creativity. “I think the engineers have to learn more non-engineering skills, and then the non-engineers will be learning more of the engineering skills, including scientific thinking, including some coding….”

Kumar agrees. Today, all Penn engineering students take business courses. “The idea of a well-rounded graduate, the idea of liberal education today, I think includes engineering and includes business, right? The thing I worry about is what happens to the anthropologist, the English majors, the history majors … I think those disciplines will come under a lot of pressure.”