AI: Four big issues surrounding OpenAI's commercialization


Concerns around the commercialization of AI models have risen amid headlines of Sam Altman's firing as OpenAI CEO and his potential hiring by Microsoft (MSFT). NYU Professor of Media and Technology W. Russell Neuman outlines the problem areas surrounding OpenAI's push to commercialize its large language model, along with certain safety concerns.

"And the third thing is the notion of commercialization of for-profit, not-for-profit. It seems to me we wouldn't have seen what's going on if there wasn't the investment from Microsoft to make it possible," Neuman explains to Yahoo Finance.


This post was written by Luke Carberry Mogan.

Video Transcript

JULIE HYMAN: Amid all this kerfuffle was some reporting that there was some concern that the commercialization of AI at OpenAI was proceeding too quickly and maybe raising some safety concerns.

What do you think is going on here, Russ?

Do you think the industry is being careful enough?

W. RUSSELL NEUMAN: Well, I think we can speculate there were four issues that could have been intertwined in the boardroom dramatics out on 18th Street in San Francisco.

The first is the issue of safety and whether pausing would make any sense.

But my view is that if you sort of stop or slow down, that's not going to make any difference.

You've got to put more energy into finding AI systems that can do the monitoring because humans can't keep up with these large-scale systems.

The second thing is OpenAI was supposed to be open source, and it's not clear whether it makes any sense to have an open-source program when you have 1.7 trillion different parameters.

Even if you open that up, nobody can make sense of it.

So the concept of open source in this day and age makes a lot less sense.

And the third thing is the notion of commercialization of for-profit/not-for-profit.

It seems to me we wouldn't have seen what's going on if there wasn't the investment from Microsoft to make it possible.

The fourth possible issue that would generate the boardroom drama is simply the personalities.

So my take on it is all four of those potential issues are sort of non-issues, because if you build a company based on selling access to your foundational AI model and it's not safe, that's not economic either.

So there's a lot of incentive to keep things safe economically.

- And, Russ, what is the biggest near-term risk when it comes to AI that you see?

W. RUSSELL NEUMAN: There was a study at Stanford where, for something like $400 to $500, they could fine-tune a system and reorganize it so that the protections that were built into the AI system were no longer at play.

So it's just going to be a Wild West.

We've got to get used to this.

If there are malicious players out there, it's very difficult to say, well, my foundational model will prevent that, because the tuning of these systems is going to reopen all those questions all over again.

JULIE HYMAN: Does what has happened over the past few days change this equation at all?

W. RUSSELL NEUMAN: It changes it for the personalities involved at OpenAI, but I'm guessing there are going to be about a dozen foundational models, paid for in large part by the large tech players, and all of the action is going to be in finding fine-tuned, specialized versions of those foundation models.

And that's good news; that means there are going to be many dozens of players working with these models.

GPT-4 cost $100 million and took 100 days and 25,000 NVIDIA chips to build.

That means that setting up a foundation model is going to take a big investment.

All the action is going to be on fine-tuning those models for specialized purposes.
