Brett and Rob on the latest moves against Big Tech. The Writers Guild strikes a blow against imposed AI. Organized labor versus EVs. The US government brings antitrust suits against Amazon and Google. And a look at what the infamous “Pause AI” petition did and didn’t accomplish.
This week on The Futurists, ultimately, the core concern here, as to whether AI will replace humans in the workforce, hinges on how capitalism functions. Since the 1960s, the primary economic driver has been increasing productivity, and the ultimate means to achieve this is to reduce human involvement in labor.
Well, well, well, welcome back, world traveler. Such a pleasure to see you again. I haven't seen you in weeks. Yeah, we haven't done a show together for a while. You know, we had that one with the NYU professors talking about AI regulation, and I ended up having Mark Buckley at my apartment in Bangkok, and we did a live recording there. But we haven't done a show for a few weeks together. I mean, we've been on the road nonstop. We both travel a lot, but you win the prize because you have. Where are you now? Like, tell me what place you're at.
I'm in Chișinău, in Moldova; I just arrived. Yeah, I literally arrived about an hour and a half ago, came via Istanbul from Bangkok. I'd been in Bangkok for less than 48 hours from my trip previously, which was to Toronto for Sibos, where we did the Future of Money panel there. And then before that, the weekend before that, I was in Riyadh in Saudi. So, yeah, I've been all over the place. You're bouncing around. You know, one of the things you notice when you travel around is when you finally do make it back to the United States, it feels like you're going back in time to the 1930s, doesn't it?
Yeah, no, I mean, that's the thing. You know, having lived in New York as a New Yorker but someone that travels all over, the infrastructure in the US might have been absolutely state-of-the-art in the 1970s, but in an absolute free market, there's no incentive to upgrade all of that infrastructure because most of it's privatized, right? Whereas if you go to places like China, which is older than the United States, or even Thailand where I am now, you've got massive infrastructure projects happening all the time right now. Fast rail, improvement of the electricity grid, all of these things are happening. There's just not the political incentive in the sort of absolute free-market economy that the US is. And that's resulted in the fact that, for many Americans, the quality of life in terms of things like basic access to healthcare and education and so forth has deteriorated over time compared with what's happening offshore.
Yeah, education standards are down. Our transportation infrastructure leaves a lot to be desired. Look how many derailments there are annually. It's a crazy number. Yeah, it's true. In terms of rail, we really do look like an emerging market or a third-world country. You know, since 2011, China has put in place 38,000 kilometers of fast rail, and we can't even get Los Angeles to San Francisco built in the same time. The big reason for that is our political system has so many checks and balances that it's pretty easy for any one group or even any one politician to stop things in their tracks. And that was by design. When they started the country in the 1700s, the concern was the concentration of power, so they built a lot of veto power into each level of government. But the problem now is government's lost its ability to get anything done. People can stop a project. These big infrastructure projects require community engagement. They require the private sector to work with the public sector and so on. It's very hard to build, and it requires consensus.
Yeah, that's a hard thing to achieve, at least in this country, and it doesn't help when you've got at least one political party that's not even dealing with a full deck of reality. So I thought it'd be fun for us to catch up on the news since I haven't spoken to you in quite some time, and what has happened in the last couple of months has been just extraordinary. I guess one of the news stories that we covered a while ago, about six months ago, was this call for a pause in AI research. If you remember, and you remember it well because you were one of the 2,000 people who signed it. I did sign it, and I expressed my reservations. In fact, I wrote an article about it; you might recall. I signed it because it was the best thing we had available at the time to force us to think about regulation. But I do have concerns about AI moving ahead without ethical guardrails, without core regulation. Europe has had a shot at it, China has had a shot at it. But beyond Biden forming a regulatory think tank to look at the potential of creating regulation, the US is, again, looking to the free market to define the alignment of AI to human society. As we know, that has some significant issues, because we wouldn't have climate change if we hadn't relied on those same mechanisms with respect to fossil fuel usage.
Well, it's very easy to, as you say, break the consensus. Our Congress is gridlocked to begin with; it's almost evenly split in both houses. On top of that, there's no advocate for the future. There's nobody lobbying for the future, but there are plenty of people lobbying for the past, and they're well-funded, so they can derail that kind of regulation however they wish.
Now, let's talk about what's occurred in the last six months. The letter went out six months ago. Some people were derisive and said it wouldn't accomplish anything. Other people, like Gary Marcus, wrote a good piece. He was one of the principal people behind that letter. He thought it was effective. He said, "Look, we're already having a regulation discussion, which we might not have had." And Max Tegmark, who was kind of the orchestrator of the letter, an MIT researcher in artificial intelligence, also felt like it was productive.
So in the last six months, here are some of the things that have happened. There's a lot. We now have multimodal LLMs. So now you can communicate, you can talk to ChatGPT, which really starts to present the interesting possibility of using it like an AI assistant on your phone. That's really new; it just came out. Open-source large language models have proliferated around the globe, meaning that innovation is happening everywhere, and small teams can access those models; they don't have to spend a huge fortune on training them. And there has been a flurry of lawsuits; we've been covering the topic on this show for a while. Principally copyright infringement lawsuits, filed here in the US and in different jurisdictions around the world.
There's been an outcry from creative professionals around the world because their writing, their work, is the subject matter that the LLMs are trained on, with the Hollywood writers' strike being the most visible... We'll get to that. Let's bookmark that because that's a whole other topic, labor and strikes, and there's a lot of news there. Now, one thing did get paused. I was interested to read about this. OpenAI did actually pause the work on training their latest large language model, which is GPT-5. Now, they say it has nothing to do with the letter; we'll see. I don't know about that. Actually, they haven't said this, but the rumor is that they weren't getting the results they wanted, and it was going to cost hundreds of millions of dollars to train. There are some questions about OpenAI's ability to keep burning hundreds of millions of dollars to keep their model ahead of the rest of the pack. I don't think people really understand how labor-intensive training these LLMs is. You've just got to hire a lot of bodies to churn through a ton of data to feed into the engines. It's not that these LLMs go out and look at the internet and learn all of a sudden; it's much, much more deliberate than that, especially if you want some sort of ethical filtering of the data.
Yeah, so on that note, you know, OpenAI has been using workers in Kenya who are paid $2 an hour to do human reinforcement of the training. So actually, there's a whole human component to this. These are not self-taught machines; they're tutored by human beings who aren't paid very well. Now, before you continue with that, I think a lot of people sort of misunderstand how machine learning works. You know, because I hear a lot of people talking about AI competing with humans and so forth and framing it in that way. And ultimately, I think that's fair. But to think of these AIs as coming up with these concepts independently, that's actually not what's happening. What's happening is it's combing through all of this human language, stuff that we've written, stuff that we've said, and it's condensing that down into these models in terms of how language works. And when you look at things like diagnostics in the medical field, it's looking at all of the work that diagnosticians have done, you know, radiologists, oncologists, and so forth. It's compressing all of that human knowledge and experience and finding those patterns that enable good diagnosis. And in doing so, what it does is it condenses a broad range of human activity and behavior. So essentially, what you get is a condensed version of all the best human application at that task. Now, it's not the same for language processing and chatbots because, as we know, language has a lot of variability to it. But for tasks like that, we are seeing these machines now perform at better rates in fields like diagnosis than humans. And it's not because the machines are better; it's just that they've condensed all of the human knowledge into an accessible framework. Yeah, exactly, like a greatest hits album or a cheat guide, you know, things that humans have been using for a long time to get an advantage or get a leg up. Also useful for drug discovery, useful for translation.
I get the people who are concerned about the threat, and I think, as Amir Gabi said when we interviewed him a month ago, most of the concerns boil down to people's fear that they're going to be replaced and people's fear that they'll be obsolete. And Brett, my observation there is that's a generational fear. When I talk to younger people, I don't get that impression. People under 30 are embracing these tools, and they welcome them, or they're indifferent to them, but they'll use them. It's older people who feel like, "Well, I've accomplished a lot in my life, and I've mastered this particular domain, and now it's my time to basically ride it out, to use my expertise." And they're feeling threatened; they're feeling they might get displaced. The practical reality is that no one yet has lost their job to any of these tools. I've heard from some people who say, "Well, graphic designers or people on Fiverr, you know, that do job work, they're getting displaced." I don't know if that's actually true or not. But in terms of any professional that I've spoken to, the way people are using these tools is like an accelerator. They're using it like a really smart personal assistant. Now, I've talked about the four phases of AI integration, you know, and I frame it this way. The first phase is alignment, where we figure out how AI should fit in society, and that's going to continue for some time. The second phase is the advisory phase, where we use AI to augment advice. This is AI-powered humans or AI-assisted humans, and we do that until such a point where the AI models get good enough to eliminate the human from that advice piece, or where we can let AI act on our behalf.
So, this is the agency phase. Now, we're probably not going to get into this agency phase of AI, where we have these AIs autonomously acting on our behalf, until the early part of the next decade. But at that point, it's a lot more binary, right? Jobs where today humans still have value because of information asymmetry, that is, they know more about a topic than you do, are the ones at the highest risk in the agency phase, because that's really where the AI is going to start to attack. So I don't think right now you can make a case that AI is producing broad technological unemployment, but that certainly won't be the case in the mid-2030s, right? So we've got a period of time to prepare for that. Now, the AI Pause didn't make significant progress; it might have ticked a few boxes. But ultimately, my concern, and I've expressed this on the show before, is that the core mechanism here, as to whether AI is going to replace humans in the workforce, is how capitalism itself works. Since the 1960s, the overriding economic driver has been increasing productivity. The ultimate tool to increase productivity is to remove humans from the labor force in production. And there is nothing in the model of capitalism today that says we should value a human worker over an algorithm. And when we look at why the AI Pause letter didn't produce a wave of rapid regulation and really stop the development of AI, it's because ultimately whether AI is going to be implemented and have an effect on employment is up to the market. It's not up to the government today. Well, it could be; it's a choice, a policy choice. For about 50 years, we've been conditioning this idea that the government shouldn't intervene in markets. Of course, it does all the time, but it does so in ways that preserve the status quo and give private sector companies free rein to make decisions.
Now, it's certainly possible to conceive of a government that does actively intervene in the private sector and in the market, like China and the EU. It's also important for people listening to recognize that the market is not society; the market is a subset of society. But we tend to run the United States as if the market is the whole of society. There are just a lot of social functions that don't really work in a marketplace, and I think we need to recognize that, and policy needs to recognize that. Now, this brings up the very adjacent topic, which you teased already, which is strikes. We should probably jump into that after the break. Should we take a little break and then come back and pick it up with the strikes? Labor action. Sure, yeah, let's take a break. You're listening to The Futurists with myself, Brett King, and Robert Tercek. We'll be right back after this break. [Music] Provoke Media is proud to sponsor, produce, and support The Futurists podcast. Provoke.fm is a global podcast network and content creation company with the world's leading fintech podcast and radio show, Breaking Banks, and, of course, its spin-off podcasts, Breaking Banks Europe, Breaking Banks Asia Pacific, and the Fintech 5. But we also produce the official Finovate podcast, Tech on Reg, Emerge Everywhere, the podcast of the Financial Health Network, and NextGen Banker. For information about all our podcasts, go to provoke.fm, or check out Breaking Banks, the world's number one fintech podcast and radio show. All right. Hey, welcome back. You're listening to The Futurists with me, Robert Tercek, and my co-host, Brett King, who's coming to us from Moldova. So, tell me, what's the weather like in Moldova? Oh, it's pretty warm. It's about 29°C today, so, you know, in the 80s. It's funny, though.
You know, I've been spending the last few months in Thailand, so I was outside in Thailand the other day, and it was raining, and it was down to about 26°C, and I was like, "Oh, God, I need a jacket. It's cold." Your blood thins out over time, gets conditioned to that constant humidity.
That's definitely true. So, you know, while Sam Altman, the CEO of OpenAI, has been on a non-stop world tour to proactively reach out to governments to talk about regulation, there's a lot of speculation that the kind of regulation he's pushing for is not the kind that will favor workers or those who are threatened by displacement. The speculation is that he's looking to create a licensing regime that essentially entrenches the leading AI companies. We've seen some trends toward that, by the way, in the last few months. For instance, just last week, Amazon announced that they're going to invest up to $4 billion in Anthropic, which is a competitor of OpenAI. It was founded by a couple of the team from OpenAI. It's an alternative approach; they have a thing called constitutional AI, where you can set some values, some regulations; you can actually incorporate what we think are human values. It's an interesting notion. Amazon is going to make that available to their cloud customers, which makes a good deal of sense. So, you're starting to see the emergence of these little blocks. One block is DeepMind and Google, another block is Microsoft and OpenAI, and another block now is Anthropic and Amazon. These are the big cloud vendors, the big technology companies, teaming up with their pet AI shop, and you'll start to see those things integrated, and that suggests the market dominance is already underway. Now, counteracting that, of course, well, I am reminded of, have you ever watched Star Trek: The Next Generation?
Yes, that was a heck of a transition there. No, no, I'm just saying because, you know, Data often referred to his ethical subroutines, right?
Right, right. And I ultimately think that we need something like that. Now, the EU has taken a two-tier approach to the system: core ethics and then other behaviors that can be... But the core ethics are critical elements of regulation, and I think that makes the most sense for now, until we can actually sort of encode a regulatory AI that creates the sort of... But you're talking about coding human values, and the big question there is we can't even agree on human values. That is a political debate, my friend. Holy cow. Maybe we get the Dalai Lama to write the ethics code. I don't know. You know what? Maybe it's for the best that the United States cannot regulate its way out of this problem right now, because we're so incapable because of gridlock. Well, because that allows the EU and others to take the leadership. We saw that with the GDPR, with the data regulations. When you look at what is happening now around the world with CBDCs and crypto, with data protection ordinances and things like that, we are seeing regulation develop globally. Because regulators have to move quickly, instead of reinventing the wheel, they'll take the EU regulation and incorporate it into their localized regulation. So when people talk about global government and things like that, the fact is it's already becoming clear that regulation is becoming globalized, 100%. Also, what about the WTO, the World Trade Organization? That is global government. The WTO can sanction a country; if that's not a global government, I don't know what is. You may opt into it, but countries want to be a part of it because it's about free trade. You may recall when we interviewed Stefan Lindström from Finland. He pointed out that the American government has outsourced regulation to the EU, and I think that's a great way to put it.
We can't agree on anything here, but it's not by intent, right? It's by default. Yeah, it's by default, right. That's exactly right. But it's kind of funny; the EU regulation is interesting because they have identified a couple of critical sectors. So anything that involves life and death, anything that involves military or national security, there, AI is going to be strictly regulated. But when it comes down to consumer-grade AIs for consumers to talk to, say, like a chat assistant or something, they're going to have very light regulation. So what they're trying to do is they don't want to over-regulate preemptively. Americans are concerned. Well, I mean, I think, right. But I think what you'd see in America, as we see with a lot of regulation, is: who is ultimately going to be writing the regulations for AI in the States? Isn't it going to be people like Sam Altman? He's pushing hard for it, right. And I mean, that's who writes regulation on drugs; it's big pharma. Who writes regulations on the use of fossil fuels? It's big oil, right? Yeah, that's true. You get as much regulation as you pay for in the United States, unfortunately.
Okay, so let's talk about who's not at the table right now. The people who haven't been represented in this conversation at all are workers, and you've alluded to it a couple of times. Of course, you've written about this eloquently in the past, the notion of technological displacement of workers, and that's a big concern people have had. By the way, it's not new. It's been around for 100 years. Actually, you can go all the way back to the Luddites, I suppose. You can say ever since the Industrial Revolution, workers have been concerned, yeah, concerned about getting displaced. We had this debate in the United States in the 1940s when automation came to factories, because in the '20s, workers had a great deal of autonomy in the factory. The management barely came into the factories. But then they started to bring in machines that could replace humans, and the workers started to do sabotage at that time as well, where they would break the machines. In a way, it was kind of reminiscent of the Luddites. So here we are today, and people are worried about AI, and the people who have been leading the charge, weirdly, I think, but in a really kind of valiant way, is the Writers Guild of America, the WGA. These are the screenwriters in New York and Los Angeles, and they've been on strike. They were on strike for nearly 150 days; the strike was just settled yesterday. That's a long time, you know. Bill Maher and Drew Barrymore were going to bring their shows back, but they caved to the pressure and didn't. That seemed to be a turning point because that's a lot of solidarity, I think. And the Screen Actors Guild has been right alongside protesting. For those who aren't in Los Angeles or New York, every day there have been strikers surrounding the movie studios and motion picture companies, picketing quite lively, actually. It's kind of a fresh reminder. At least when I was a kid, there were strikes all the time in the 1970s.
Yes, we had a very vibrant labor movement until President Reagan in the '80s kind of broke its spine.
So this is... It wasn't just Reagan. I mean, I talk about this in "Technosocialism." It was also, of course, "The Iron Lady," Margaret Thatcher in the UK, who kicked it off. And if you look at what happened, the 1970s were the peak of that movement, and Reagan really attacked the unions and collective bargaining. They said the unions were holding the rest of the market at gunpoint, holding them hostage, and so forth. But essentially, Thatcher and Reagan won that battle. They reduced the power of collective bargaining and the trade unions overall. The net effect of that was that real wage growth stopped. If you look at real wage growth from the 1980s onward in the United States, in the UK, and in Australia, where I'm from, real wage growth has been absent. There's an argument to be made that the legislation that reduced the power of trade unions and collective bargaining is exactly why we haven't had real wage growth in countries like the States.
That's correct. I think there's a lot of merit to that, and, of course, there's also an economic incentive for companies to invest in technology, right? That's capital investment they can write off, and it's what they call capital-biased technological change, where the more you invest in technology, the more the returns go to capital and not to labor. And there are actually hard metrics on this today. If you look at the best blue-chip companies of the 1960s and 1970s and how much profit per employee they generated, and then you look at organizations like Apple and Facebook (Tesla is not a great example), these tech companies are earning 10 times more per employee, inflation-adjusted, than those 1960s and 1970s companies. And that's because there are a lot fewer employees. It's like the work's being vaporized, Robert. Well, the other point is that those companies can scale to planetary size, right? That previously was impossible in the era of physical goods and manufacturing. But now they can. Now there's a lot of labor movement happening, so I think people have been inspired by the Writers Guild. By the way, they won. This is really remarkable because at the time, while I'm a supporter of the Writers Guild, I thought, well, they're doomed. AI is coming, the movie studios are on the defensive, they're getting attacked by big tech, they're going to have to automate their way out of this problem. But the writers held in there. They got support from other unions: the Teamsters, IATSE, which is the theatrical workers' union, the Screen Actors, even the Directors Guild for a little while. And as a result, they won, and they got almost everything they asked for. They got an increase in residual payments, or bonuses based on streaming, that they didn't have before.
They got guarantees about the minimum number of writers who have to be hired on a show, so that you can't expect a single writer to do a whole show. That's a big change; they never had that before. And perhaps most importantly, they won on artificial intelligence, which was one of the most contentious points. Now, people get this wrong, so I want to take a second and explain it. Some people say the Writers Guild was against artificial intelligence, and if you looked at the signs the protesters had, sometimes you got that impression, like they were against ChatGPT. The fact is, the Guild is not against automation. It's not against AI in any way. They just don't want it to be forced on them, which is very similar, by the way, to the complaint of the Luddites; in a weird way, it's like a recap of that. They're saying, "We'll use AI if we want to use AI, but we don't want someone to force us." And there was one specific point that was most contentious, and that was: could a producer use something like ChatGPT to generate a script and then hire a member of the Writers Guild to do a polish on it? The writer only gets paid about 25% for doing a polish versus what they get paid for an original script. So that would have been a way for the studios to evade a contractual obligation.
It was like a loophole that could have undercut the value of a screenwriter. They fought very hard, and they won on that point. So AI-generated material can't be treated as literary material under the contract, and a studio can't force an AI-generated script on a writer, but a screenwriter can choose to use AI to generate material. That's exactly right, and they're not opposed to it. For instance, there's a lively discussion right now on whether the studios can use the writing to train an LLM. That was another contentious issue, and it looks like the studios are going to win on that, not to replace the writer but to automate things like the descriptions that go into streaming media services or the metadata around the film. It makes a good deal of sense for that to get automated, or to use it to write promotional copy.
You know, we're using tools like Opus to take clips from our show, The Futurists, and create short-form videos. It looks at patterns of listener behavior, where people highlight, and the cadence of language to choose segments and automatically generate these short-form clips. It's a pretty powerful tool, and I can see us continuing to use it. It doesn't mean that hosts or guests are being compromised. No one's going to interview you with ChatGPT on this show anytime soon, but it empowers your social team to reach a diversity of platforms. There are way too many social platforms, so a small team can be more effective. Now, Spotify is using generative AI to take a podcaster's voice and produce the show in different languages, with the permission of the podcaster.
Other people are using it for translation, automating marketing materials, and more. Let's talk about other labor action because, of course, the writers' strike was not the only strike we've seen. There have been attempts to strike against Amazon in their warehouses during the pandemic, and now the auto workers in the United States have gone on strike. This is a big deal because the United Auto Workers is one of the biggest unions in the country. They are striking and looking for a 40% pay hike over three years. It's a pretty aggressive demand. Auto companies had a tough time during the pandemic, but they're back and making money now, so it's the right time to ask for it. They've had their biggest profits in over a decade.
There's a simple counterargument to the idea that this will end auto manufacturing in the United States. German automakers can do it, so does that mean American automakers are grossly inefficient compared to German automakers? No one would argue that. The other piece is that Tesla can do it in the United States, but they can do it with high levels of automation. This is going to have to be negotiated as part of the deal. Let's talk about Tesla because they're an important company to watch in this space. While it's reasonable to expect Ford and GM to come to some sort of arrangement with their workers, Tesla is interesting because it has exactly zero union workers. It's an auto company in the United States that doesn't hire any union workers.
How do they do it? Well, they move their plants to places where they're aggressively anti-union. There's no one United States when it comes to this sort of thing because there are several red states that are very anti-union. For instance, South Carolina is a state that welcomes auto manufacturers. They have been doing a great job; they just landed another plant from Audi in the state of South Carolina. They already have Mercedes-Benz, BMW, and Volvo, so they're doing great at attracting European automakers. Part of that is because it's a right-to-work state, so the union doesn't have as much power to shut the place down, which diminishes labor's power to bargain. However, what it does mean is that jobs are created.
The leading proponent of this is Elon Musk and Tesla; they are firmly anti-union. So, as workers try to organize and extract more profit for labor instead of it flowing back to capital, it creates an incentive for corporations to relocate to more favorable jurisdictions. We'll see this happen again, much like what's happening with mass automation. Some jurisdictions will seek to ban or restrict the implementation of artificial intelligence to favor human workers, while others prioritize automation.
The money that you pay workers circulates in the local economy, which helps keep a community thriving. If it's fully automated, the money primarily goes to shareholders and doesn't necessarily stay in the local community, which can lead to community decline. So, it's important to watch this space because labor policy will continue to be a topic of discussion for the next 20 years.
Also, it's the 25th anniversary of Google, and the Department of Justice has given Google not one, but two big presents to commemorate the occasion. They're being sued for antitrust in two cases: one for search, targeting their dominance of search, and another case coming for their dominance in ad serving. It's challenging for anyone to argue that Google isn't a monopoly, as they hold an overwhelming 90% market share in search.
Google has already been sued for antitrust in the EU and the UK separately, so the US is a bit behind on this issue. US policy on monopolies and antitrust has evolved, and while it might not be a widely discussed topic, it's essential, similar to copyright, which is another noteworthy area of concern.
Antitrust law in the United States has a long history, dating back to the Sherman Antitrust Act of 1890. These laws were initially designed to break up the big trusts that dominated entire industries. However, they have been largely dormant over the last 50 years, partly due to the influence of the Chicago School of economists, who introduced the consumer-welfare standard: the idea that a monopoly should only be considered harmful if it raises prices and extracts monopoly rents from consumers.
This theory, developed in the 1970s, has been influential in shaping antitrust policy ever since. The last major antitrust case in the United States was against Microsoft, filed in the late 1990s, marking a significant gap until these more recent actions.
Lina Khan, the chairwoman of the Federal Trade Commission, has filed a major antitrust lawsuit against Amazon. She presents a new theory of antitrust, focusing on how companies like Amazon can provide services for free or at low prices while effectively restructuring entire markets. Amazon might not dominate the entire retail market, but in e-commerce its market share is substantial.
Khan's lawsuit argues that Amazon has harmed sellers within its marketplace, shifting the focus from consumer harm to seller harm.
In the discussion, the focus also shifts to healthcare and the high cost of insulin in the United States. While antitrust issues may not apply to this particular problem, it highlights the broader issue of high drug prices in the U.S. Citizens pay more for drugs and healthcare in general than any other country. The conversation extends to broadband and mobile phone services, where Americans often pay more and have fewer provider choices compared to other countries.
It's a great question. The United States was founded by people who were fiercely anti-monopoly. They fought against monopolization; the Boston Tea Party, for example, was a protest against monopolies. Yet, here we are, effectively allowing government-sanctioned monopolies. I find it to be a great paradox. Who knows, Brett, we live in a world of interesting times and change. I think you get as much government as you pay for in this country, and you get as much as you can afford.
Someone was commenting on a post I had commented on the other day, and they made a comment about their constitutional right to gun ownership. I replied, "What about a constitutional right to health, a roof over your head, access to food, clean water, and clean air? Why prioritize guns over those basic needs?" The U.S. economy has seen a deterioration in the affordability of housing, healthcare, and food. Inflation might not be a big deal according to the figures, but many people are questioning why it costs so much for basic items like a six-pack of bananas.
That's right. The cost of living is skyrocketing, with increased costs for mortgages, car leasing, and more. It's not unique to the United States; economies worldwide are facing structural issues. Inequality is a growing problem, largely due to increased automation. Automation leads to the creation of an AI landlord class that owns assets automating large portions of the economy, leaving many with reduced wages to cope with an increasing cost of living.
A surprising statistic is that privately owned vans for van life are the fastest-growing category of housing in the United States. Vans that used to cost $30,000 are now selling for $100,000 because the cost of traditional housing has become unaffordable for many. In the 1960s, the average American could buy a home for a family of four for about three times the average annual salary. Now, in places like New York, it's 19 to 26 times the average salary. Dismantling unions and accelerating automation have degraded labor, affecting the lifestyle and quality of life for a significant portion of the population.
I'm wondering what the future of this is going to be. We'll definitely cover this topic more in the future. Look, I'll leave you with this thought because I know we've got to wrap up, but ultimately, if you think about the application of artificial intelligence, and this is what I think a lot of people miss, when you look at the 1960s and 1970s, the science fiction movies and so forth that depicted robots in society, like Rosie the Robot, the plan always was for robots to take away menial human labor, like housework and working in factories. For literally 60 years, technologists have been preparing us for robots to take human jobs. That has always been the intent of automation, AI, and robotics.
So, ultimately, if AI is successful, which I see no reason why it won't be, we have to start dealing with the fact that human labor is no longer a mechanism for the distribution of wealth in society. We need some other way to do that because the less you can employ human laborers in the workforce, the less effective mechanisms you have for the distribution of wealth in society. This is the big problem that's coming, even if you can maintain human jobs, you're not going to get paid as much because the most valuable companies will be highly automated. It's a philosophical change in the way work relates to society and humanity, and it's a big deal. It's going to take decades to figure it out.
In a way, it's tragic that OpenAI didn't stick with its original mission of being truly open. Their idea was an open alternative to the big tech companies, but that's over because they've taken massive investment from Microsoft, and they're now part of the Microsoft world. They could have imagined a scenario where small companies could compete using open tools, but my sense is that those companies will fall behind if this becomes a battle of scale and a battle of service. The companies with big data centers all over the world will have a strategic advantage, so they'll hasten this issue.
I think the app economy, like what we saw with startups creating value quickly on the app ecosystem, is going to happen for spatial computing and AI specialization in these areas. There are commercial opportunities, but the broader industrial-scale AI and conversational AI built into operating systems and experiences will be dominated by these oligopolies.
Well, that's not the world's brightest note to end one of our episodes on. Generally you're the optimist here, but today I feel like I'm the one holding up the optimistic end of the bargain. But it's always a great pleasure to see you. Now I owe you a conversation about China because there's a lot of misunderstanding here in the US about what's actually happening on the ground. Since you've been there quite often and you have a good perspective on it, very soon, let's record a show where I can ask you what's the truth about China.
Right, that would be good because this is a conversation that I have frequently online, particularly with Americans, not so much with people in other parts of the world. Let's definitely get into that. It's something that I'm passionate about.
Super! Safe travels to you, and I hope it goes well. I hope you finally get to one time zone and can stay there for a while.
Super fun to see you, folks. You've been listening to "The Futurists" with my co-host, Brett King, and myself, Robert Tercek. We are very grateful for your listening. Thank you for supporting the show. I want to give a shout-out to Kevin Hon, who's our engineer, and to Elizeth S, our producer, and the whole team at Provoke Media. Thank you all very much for making the show possible. And a big shout-out to those fans of the show who've been sharing it on social media and telling their friends about it. We're growing, and that's because of you. We thank you very much for your support, and we will see you soon. In fact, we'll see you in the future.
Well, that's it for "The Futurists" this week. If you liked the show, and we sure hope you did, please subscribe and share it with people in your community. Don't forget to leave us a review, as that really helps other people find the show. You can ping us anytime on Instagram and Twitter at @futuristpodcast to suggest folks you'd like to see on the show or questions you'd like us to ask. Thanks for joining, and as always, we'll see you in the future.