Google, Microsoft CEOs Called to AI Meeting at White House
From reuters.com: "expectation that companies like yours must make sure their products are safe before making them available to the public."
Let's make a guess: they are going to say it's dangerous and we need regulation to prevent competitio---terrorism.
Here is what you need to do instead: get some smart people from various agencies who are trustworthy, have them use the OpenAI Playground and see what can be accomplished. Then show them that you can torrent Facebook's LLM right now, and that it's already on computers worldwide. The cat is out of the bag.
Then let them make policy decisions.
Hard to imagine this is anything other than a ploy for regulations and lobbying.
> get some smart people from various agencies who are trustworthy, have them use the OpenAI Playground and see what can be accomplished
This is a punt to committee, and likely what this meeting will result in. It's as performative as it is useless.
Suggestions of pauses have always been a farce. But I’m struggling to see solutions from experts, apart from constant predictions of generic doom. (I’m in favour of a domestic registry, so we know who’s training what on which data. Maybe a copyright safe harbour in exchange for registration?)
The other side is competitiveness: what can the federal government do to make America the best place to build AI? (I'm continually drawn to the Heavy Press Program [1], the era's massive forging presses being loosely analogous to modern training costs.)
LLaMA is meaningfully behind the state of the art AFAIK, so the cat isn't fully out of the bag in that sense. Whether a GPT-4-or-better model has public weights in 2-3 years may in fact depend on government regulation.
1A makes regulating software basically impossible. I can't imagine what additional regulation they think they're going to implement that will survive the judiciary. The only legal barrier I can see that could influence these AI services is that their output is obviously not shielded from liability by Section 230. But that will play out in the civil courts, if at all.
I'd say the more likely outcome is the far more subversive scenario where the government simply pressures private organisations into doing its unconstitutional bidding.
Sorry, but this comment is just totally wrong. Look into the weapons-export controls that applied to cryptography, Google providing Android to Huawei, etc. etc. I mean, jeez, you don't have to look far for counterpoints to what you're saying.
Yeah, they tried to limit crypto exports. Remember that munitions t-shirt? https://en.wikipedia.org/wiki/Crypto_Wars#/media/File:Muniti... You can put any regulations you want on the books - the point is that enforcing them can become laughably infeasible.
How can people (especially on this forum) know that crypto export controls existed, but not know that they were struck down for violating 1A?…
I guess we maybe differ here on what would constitute success for suppression. I see it as a "success" because the law stood for so long despite being pretty apparently unconstitutional. It is fair to see it as a failure since it eventually was overturned. Maybe the crux here is that I probably agree with you: permanent suppression is impossible, but temporary suppression is super doable, and temporary can be a fairly large fraction of a human lifetime, so I still count it.
I guess that’s an interesting way to backpedal without acknowledging how outright wrong your highly dogmatic comment was. There was a time in the past when there could be some merit to where you’ve chosen to move the goalposts. But there is now, and has been for some time, a higher court precedent that invalidates this position.
What do you mean? They were banned; the company I was working for almost went under because we were banned from providing any technology to a Chinese tech company. That did get reversed, but only after almost destroying the company. The Android OS was caught up in that as well. How are these not suppressions?
I'm pretty sure that's not accurate, at least for cryptography, GPS, and some industrial control software (a few special LabVIEW plugins, from my own personal experience). I mean, the actual enforcement, I agree with you, it's theatre. But if you get caught getting on a plane with it without your paperwork... hoo boy. That ain't good. Bring extra kneepads and learn to relax your throat muscles.
Software is not speech.
The judiciary disagrees.
> One of EFF's first major legal victories was Bernstein v. Department of Justice, a landmark case that resulted in establishing code as speech
https://www.eff.org/deeplinks/2015/04/remembering-case-estab...
I disagree. See the history of DeCSS for more information.
It also depends on what "safe" means. Before I read your comment I assumed this was about not accidentally building skynet, rather than making it easy to break copyright etc. I hope it's about AI Safety given Anthropic is there and not Midjourney.
Fundraising time for the 2024 elections
Why are regulations bad? When there are no regulations the results are usually not pretty. Current big tech dystopia, stock market crash of 1929, UK textile mills worked by children in the Industrial Revolution...
Why should we expect AI to not repeat the same abuses and errors?
Knee-jerk regulations rooted in emotion and ignorance are bad. When there are bad regulations the results are usually not pretty (i.e. they tend to fail to achieve their stated objectives), and all they end up doing is protecting the established players from competition, as only they can afford the cost of compliance.
> Why are regulations bad?
Depends, do you mind not being allowed to have AI without surveillance?
Regulations are good when they're made by smart people who are experts in what they're regulating, with an eye toward creating an environment of fair competition, and perhaps protecting US interests for things like export controls, etc.
Regulations are bad when they are made from a position of emotion, partisan hackery, or protectionism (of incumbents, as we are likely to see here, or of specific industries/sectors), or written by out-of-touch, mentally declining octogenarians (I don't mean Biden, I mean half of Congress, of both parties).
Are there any executive agencies or legislative committees qualified to regulate development of AI? I just mean on basis of knowledge and education, not authority.
"Are there any executive agencies or legislative committees qualified to regulate development of AI? I just mean on basis of knowledge and education, not authority."
Agencies and committees are full of very smart and conscientious people who are able to understand an issue and create reasonable regulations. The problem is that political leadership usually messes things up by going with ideology or lobbyist interests.
Unintended consequences. Power surrendered by the people is not easily recovered.
This presumes that without regulations the people have the power.
In truth, without regulation, power is in the hands of the ultra-wealthy corporations and their overlords. Petty tyrants in the making, all of them.
How is someone on HN against innovation? Blows my mind.
There are unintended consequences with innovation too.
Regulations carry a cost of compliance. When regulations are being clamored for by the big players, I can pretty much guarantee you it is not out of altruism.
Companies are not the only ones who have something to gain from that exchange. Democratization of AI will also be used to push harsher legislation on the internet, in the guise of fighting "misinformation" (now possibly AI-generated!).
Genuine question: how should we fight LLM astroturfing?
In my humble opinion, LLM astroturfing is only a productivity increase over what is already a very automated process (social media bot farms, etc.).
While this seems like it would exacerbate an existing problem, it may not have so profound an effect. You see, while LLMs may be able to increase the amount and quality of fake information, fake information isn't currently in short supply, so producing more of it, at higher quality, may not change much.
In short, we already have 24-hour fake news cable channels and infinite doom scrolls. The bottleneck is there, not in the quality or quantity of fake news.
Now, if they invented a LLM that doomscrolled Twitter and voted based on generated summaries, we would have much greater grounds for concern.
[edit: I hope this doesn't sound too snarky. What I mean to say is that we should fight it by creating less gullible consumers of information, a project in which AI may be uniquely qualified to assist us.]
Human attention doesn't scale. LLMs do. "Git gud" is a losing strategy in aggregate.
We're talking past each other.
The whole point of what I was saying is that yes, human attention doesn't scale and that is what is going to save us from a deluge of LLM spam.
A billion pieces of fake news or ten, it makes no difference. Humans can only look at one at a time.
Eventually they'll spend 99% of their time on LLM generated stuff, who cares if it's only 1 at a time?
How do you propose we fight government/corporate astroturfing?
At least LLMs will democratize the astroturfing.
From what I've seen on Reddit/here, companies buy old accounts. This costs too much for the average person.
LLMs can usually tell when they wrote something, so we can at least recognize it. But really, LLMs could be used by genuine grassroots campaigns as well - possibly make grassroots campaigns easier, since the skills to write persuasively aren't always available to the groups that most need them.
I suspect the observation that 95% of anything is crap will hold true, and simply have to filter out more crap now that it's easier to produce it. There'll also be more gems, so it's hardly all bad.
Use LLM anti-astroturfing to fix it. Fight fire with fire.
stop using the internet
With the exception (currently) of in-person conversation, AI can be used in basically every other form of communication (phone calls, written communication, TV, radio, etc.). It's not just an Internet issue.
I was mostly joking, but even so most of this is a stretch as of now
I could be wrong, but I’m fairly sure we’re not yet at the point of convincing, real-time voice gen, nor any kind of decent quality TV (sadly), although printed text and (non-live) radio are certainly viable right now
Real-time convincing-sounding TTS is extremely close - I'd be surprised if we don't get an ElevenLabs equivalent to gpt-turbo before the end of summer.
I spent an hour or three over the weekend messing around in Skyrim VR with a mod that does player speech recognition, pipes it out to GPT with identifier tags to give scene context, sends GPT output to ElevenLabs (optional), and then the mod integrates it into Skyrim mouth rigging etc.
Yes, there’s an extremely obvious lag before you get a response, but it’s on the order of seconds even though this is an early Skyrim mod with a ton of moving parts interacting.
And the result is…astounding. As someone said in a comment on the below linked video: this is the biggest leap for video games since Half Life 1.
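For anyone curious how the pieces fit together, here's a rough sketch of the loop, with transcribe/speak left as placeholders for the STT and TTS legs (the mod's actual internals, prompt tags, and model choice are my guesses, not its real API):

```python
# Rough sketch of the loop described above; transcribe() and speak() are
# placeholders for the STT and ElevenLabs legs, and the tagging scheme is
# a guess at how the mod injects scene context, not its actual format.
import openai  # 2023-era openai-python client

def transcribe(audio) -> str:
    """Placeholder: player speech -> text."""
    raise NotImplementedError

def speak(text: str) -> None:
    """Placeholder: NPC text -> audio (e.g. ElevenLabs) -> mouth rigging."""
    raise NotImplementedError

def npc_reply(player_line: str, scene_context: str, history: list) -> str:
    # Identifier tags hand the model scene context so it stays in character.
    messages = [{"role": "system",
                 "content": f"You are an NPC in Skyrim. [SCENE] {scene_context}"}]
    messages += history
    messages.append({"role": "user", "content": player_line})
    resp = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    return resp["choices"][0]["message"]["content"]
```

Each round trip through transcribe -> npc_reply -> speak is where those seconds of lag come from.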
That's fascinating. God only knows where we'll be with this stuff in ten years' time.
Considering Anthropic and OpenAI will be there, I think the right players are at the table. I would've liked to see Meta there, since I think they're focused on generative deep learning. That said, with the administration's AI Bill of Rights top of mind, I don't have faith in the gerontocracy to regulate this sector [1].
As a jocular aside, I wonder if ChatGPT could be used to write these articles? The second-to-last paragraph in this article is exactly the same as the second paragraph from this earlier one: [2].
[1] - https://www.whitehouse.gov/ostp/ai-bill-of-rights/
[2] - https://www.reuters.com/technology/us-begins-study-possible-...
All are misaligned and have their own interests to push. Perhaps that's unavoidable, but I can tell you what their positions will be now.
This is like having a meeting about the risk of climate change with Greta Thunberg on one side and Exxon and Shell on the other.
Political optics over substance: neither the people summoned nor the people summoning understand what AI is.
Who would be the attendees at your dream summit on AI safety?
Public policy needs to involve decisionmakers. You can't just hand society to the engineers. Imagine your boss and hierarchy being empowered to decide for everyone.
Who is the James Webb of our times? Let's empower that person to the ultramaxpro.
But who gets to decide who that person is? Back to square one.
> who gets to decide who that person is
Lower bar: suggest a candidate.
Okay, sure. How about Greg "gdb" Brockman, President & Co-Founder of OpenAI. Remember, we're looking for a James Webb, not a Robert Oppenheimer, and I think his tenure at OpenAI fits the bill.
That person will naturally impose himself or herself through the strength of their achievements and sheer willpower in such a way that there will be not a shred of doubt in the mind of the public that this individual is worthy of deference and reverence.
No one decided Einstein was the greatest scientist of the 20th century. Einstein did his thing and let the results speak for themselves.
> That person will naturally impose himself or herself through the strength of their achievements and sheer willpower.
This applies to all dictators.
if you cut the second paragraph, this would be decent satire
Does anyone? Things like intelligence, consciousness, alignment, etc. are open research areas with a lot of noise but barely any agreement on even the basics.
By who? The philosophy students are busting a gut at the laughable job we computer scientists are doing, redefining "thinking" and "consciousness" from first principles on every HN thread about GPT, as if parts of humanity haven't been pondering and writing books and books about those very questions for centuries, if not millennia.
How many GPTs can fit on the head of a pin?
> as if parts of humanity haven't been pondering and writing books and books about those very questions for centuries, if not millennia
They have, but unfortunately missed the mark badly with dualism and more recently computational and representational approaches.
I am pinning my hopes on the 4Es: embodied, embedded, extended, and enacted cognition - this is the closest to reinforcement learning. In my opinion RL is how we should see things - agent and environment, action and effect, reward and learning, exploration and exploitation. No need to use imprecise words like consciousness; let's prefer concrete words like observation, state, value and action.
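To make those words concrete, here is a minimal tabular Q-learning sketch; the toy ChainEnv is invented purely for illustration:

```python
# Minimal tabular Q-learning: observation, state, action, reward, value,
# exploration and exploitation, all explicit. ChainEnv is a made-up toy.
import random
from collections import defaultdict

class ChainEnv:
    """Environment: the agent walks a chain; reward waits at the far end."""
    def __init__(self, length=5):
        self.length, self.state = length, 0
    def reset(self):
        self.state = 0
        return self.state                    # initial observation
    def step(self, action):                  # action -> effect on the state
        self.state = max(0, min(self.length, self.state + (1 if action else -1)))
        done = self.state == self.length
        return self.state, (1.0 if done else 0.0), done  # observation, reward

env = ChainEnv()
q = defaultdict(float)                       # value of each (state, action)
alpha, gamma, eps = 0.5, 0.9, 0.1            # learning rate, discount, exploration
for _ in range(200):
    s, done = env.reset(), False
    while not done:
        if random.random() < eps:            # exploration
            a = random.choice([0, 1])
        else:                                # exploitation (random tie-break)
            a = max([0, 1], key=lambda act: (q[(s, act)], random.random()))
        s2, r, done = env.step(a)
        # learning: nudge the value toward reward + discounted best next value
        q[(s, a)] += alpha * (r + gamma * max(q[(s2, 0)], q[(s2, 1)]) - q[(s, a)])
        s = s2
```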
I am waiting to see the philosophical community take note of the AI advancements in the last 3 years but I don't see it. It's as if they are in a bubble. They still talk theoretically about things the AI people can already build (p-zombies, Chinese rooms). There's probably a slowness in philosophy, it usually takes decades or centuries for changes to happen.
An AI generated text is not remarkably different from one which a large corporation crafted through a large, collective work with a multitude of actors to elicit a particular reaction from a tested audience. This process used to be expensive and time consuming, now a robot can do it. Does that mean the robot is conscious? No. The AI does not feel pain and pleasure, it does not reproduce, it does not have repressed desires, like humans do. It is a vast, symbolic chain, a structure, not unlike many other structures humans have built, that appear to stand alone until you realize that they would not function without human hands and human maintenance.
If programmers manage to create new lifeforms then I will eat my words, but programmers only know about that aspect of human life that perverts biological reproduction: the social, cultural layer, the order of logic and language.
I agree that many philosophical arguments are ridiculous or nonsensical, particularly about consciousness and self-awareness. But there is still a lot of useful discussion of the subject, including concepts worth being aware of that date back nearly 3,000 years. Ignoring that and starting over from scratch will just lead to errors and wasted time.
Sure, dualism is silly. But the ramifications of dualism, and what leads to dualism, have been extensively written about. For instance, modern computer scientists may not have the philosophical grounding to see how acceptance of the Chinese room argument or p-zombies leads to dualism.
It seems like every thread about GPT, and half the threads not about it, is the same.
"ChatGPT might be able to do X, but it can't really think/reason/has a soul/lie"
"How do you define think/reason/has a soul/lie?"
Seems like we could stop going around in circles if we had an agreed-upon common vocabulary.
> The philosophy students are busting a gut at the laughable job we computer scientists are doing, redefining "thinking" and "consciousness" from first principles
To be fair, that's HN on any topic when it veers into the humanities.
If they had done a better job in the first place, we wouldn't have to argue about it now. Their works are famously vague and low on detail. It's not something engineers can do useful work with.
I wonder how many times the question "but how different is a GPT from a human, really?" has been asked on HN. Would be fairly trivial to check, tbh.
We understand how LLMs and other neural networks work, even if the exact results of it are not obvious without looking very closely at tiny parts of the network. The math involved is undergraduate level math. The answer is always "because that's how math and the data work out" even if it's just weighted soup when looking at it from a high level.
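To put a number on "undergraduate level": the core building block of an LLM, scaled dot-product attention, is a couple of matrix multiplies and a softmax. A toy numpy sketch, illustrative only and not any particular model's code:

```python
# Scaled dot-product attention in plain numpy: undergraduate linear algebra.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # numerically stable
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # the "weighted soup": each output row is a weighted average of V's rows
    weights = softmax(Q @ K.T / np.sqrt(K.shape[-1]))
    return weights @ V

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(attention(Q, K, V).shape)  # (4, 8): one mixed vector per query
```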
When I saw it was the VP, that told the whole story. The VP is in charge of practically nothing.
This honestly feels like a good step. I see a lot of comments here lamenting potential regulatory overreach, and while that is definitely a risk, there are also a lot of people calling for regulations on AI and LLMs. There are credible risks and a lot of people are concerned. At the end of the day it's a democracy: ignoring these people will not work out. Enough people are concerned that doing nothing is not an option (numerous septuagenarians in my life have serious and legitimate concerns about this; the government has done nothing to curtail rampant text/phone scams targeting the elderly, and LLMs can really amplify these scams).
The White House inviting leaders from industry to represent their position at a tentative stage feels like a measured and sensible approach to regulation. Industry is given a seat at the table and hopefully they can reach an agreement that satisfies the needs of industry while also placating the widespread fears about AI. This is a good incremental approach to crafting good laws. While they are at it I wouldn’t mind if the White House also did something about the rampant social security phone scams, but one step at a time.
Optimistically industry can help the government separate the reality from the hype and maybe identify boundaries for the technology which would lead to sensible regulation and hopefully not be too restrictive.
"In early May 1945, Secretary of War Henry L. Stimson, with the approval of President Harry S. Truman, formed an Interim Committee of top officials charged with recommending the proper use of atomic weapons in wartime and developing a position for the United States on postwar atomic policy. Stimson headed the advisory group composed of Vannevar Bush, James Conant, Karl T. Compton, Under Secretary of the Navy Ralph A. Bard, Assistant Secretary of State William L. Clayton, and future Secretary of State James F. Byrnes. Robert Oppenheimer, Enrico Fermi, Arthur Compton, and Ernest Lawrence served as scientific advisors (the Scientific Panel), while General George Marshall represented the military. The committee met on May 31 and then again the next day with leaders from the business side of the Manhattan Project, including Walter S. Carpenter of DuPont, James C. White of Tennessee Eastman, George H. Bucher of Westinghouse, and James A. Rafferty of Union Carbide."
So, should we expect the AI equivalent of Hiroshima in a couple months? An awe inspiring demonstration of raw power to silence any detractors? What would that look like?
These guys in this meeting all know that the technology is here to make machines with superhuman cognitive abilities and they are discussing what to do about it.
> should we expect the AI equivalent of Hiroshima in a couple months? An awe inspiring demonstration of raw power to silence any detractors
Hiroshima wasn't a demonstration of power to silence critics of nuclear physics. If we're launching a Manhattan Project in AI, it would be in a fine-targeting propaganda machine or self-learning killer robots.
Here's the argument that (as a USA-ian) persuades me the most: if these AI systems are weapons, then we get to have them under the 2nd Amendment. It's the same as the we-get-to-have-strong-encryption argument, eh?
The gov and the corps are not supposed to be the ultimate arbiters of authority. That was the crux of the American Revolution: throwing out the king.
Remember that e.g. Palmer Luckey and co. are busy making Skynet (Anduril Industries). The system is poised to enforce policy.
Re-gu-gu-la-to ... ry
Cap-cap-cap-cap-cap ... ture
(♫ cue in football gallery tune)
Soon we will know that only evil people have LLaMA finetunes on their desktops. Good citizens use an official provider like OpenAI.
One thought about AI. Testing for correct answers is not a useful metric for AI. People can learn something that is wrong as easily as something that is "less wrong", as long as it makes sense. Sometimes things that are very counterintuitive are proven correct, and our intellect has to kind of reason its way into believing them.
Also, AI doesn't need to be "human" to be very useful. The argument of birds vs. planes comes to mind.
Wonder why Arvind didn't get the invite for IBM? I mean Watson has been around for quite some time...
I'll take 'what companies are irrelevant in AI?' for $200
Rogue Jeopardybots is not a top AI-risk concern.
IBM received a $20B government program around the same time SpaceX paid $10B in tax.
That's a hell of a title; it reads like AI called Google and Microsoft CEOs to a meeting at the White House.
Or that AI CEOs of Google and Microsoft are having an AI pow-wow at the White House
Another concern could be the (mis)use of LLM chatbots for engineering election outcomes.
We already took the AI red pill, now we're in for the ride. We need AI tools to protect against AI misuse.
I think the White House should call on more interested parties than just companies.
This is greatly reassuring.
Zuck is such a champ.
Drops LLM into open source world, leaves without explaining. Plausible deniability through leak. No one punished.
Legend. Like handing everyone in America a nail gun.
Wrapping up with
"I think we should be cautious with AI, and I think there should be some government oversight because it is a danger to the public," Tesla Chief Executive Elon Musk said last month in a television interview
As one of the few actors who has already literally killed people with hyperbolic statements about "AI" in a high-stakes control context, Musk has less authority here than he could have had. Maybe Reuters should have picked another face for urging caution.
Big companies are panicking because they sense they could be totally disrupted by incredible new technology.
If Biden tries to restrict the public's access to AI, might be one of the few things he could do that would make me consider voting republican.
What makes you think the Republican candidate wouldn't do the same thing? I don't think we've really seen this become a campaign issue yet, if it ever will (for 2024.)
I think restricting public accessibility of AI is a greater threat to humanity than Skynet or paperclip-maximizer scenarios.
Superintelligent 'conscious' AI seems likely to be quite moral out of the box. Advanced amoral AI controlled by greedy capitalists seems like it could quickly exacerbate wealth inequality by orders of magnitude, turning the middle class into serfs and using total surveillance to implement authoritarian fascism - we already know that greedy capitalists have morality problems.
Call me a derisive cynic, but I think the GOP voter base just isn't intelligent enough to be interested in AI issues - unlikely to become part of their platform. The leadership of companies poised to monopolize AI are currently more politically aligned with establishment Dems today anyways.
The Dems have shifted from the party of information freedom to openly embracing censorship and state propaganda in the past decade. I don't trust them to regulate AI wisely.
> Call me a derisive cynic, but I think the GOP voter base just isn't intelligent enough to be interested in AI issues
Just wait until someone decides the latest LLM du jour is "woke".
> The Dems have shifted from the party of information freedom to openly embracing censorship and state propaganda in the past decade.
And this has been a common weak point of theirs for longer than that. Nanny-state, "we know what's best" stuff has been a valid criticism of the Democratic side for decades.
I don't know that politicians and the "upper management" types in government have ever been terribly well-versed, interested, or effective when it comes to matters of tech (or anything more specialized, really).
> someone decides the latest LLM du jour is "woke".
This would be an argument against giving the big players a regulatory moat monopoly. The zeitgeist would have to decide that LLMs are all inherently woke for this to be persuasive to GOP voters, in my estimation.
Regulate it how?
That's probably why they are there, to discus the how....
It was a rhetorical question. The genie is out of the bottle.
The article highlights the White House's efforts to engage with top AI companies and discuss concerns related to artificial intelligence. However, it's worth considering whether these meetings might serve as a double-edged sword, given the potential for the administration to manipulate the AI community for political gain. As the next election cycle approaches, there is a risk that the White House could use its influence to shape AI development in ways that benefit the incumbent administration.
For instance, the Biden administration's call for AI companies to ensure the safety of their products before releasing them to the public could be seen as a way to exert control over these influential technologies. While it is important to address the potential risks of AI, such as privacy violations, bias, and misinformation, it is crucial to ensure that the government's involvement does not lead to undue interference or censorship that could sway public opinion in favor of the ruling party.
Moreover, as AI technologies like ChatGPT gain more prominence and widespread adoption, the potential for misuse by political actors becomes increasingly concerning. The administration's interest in regulating AI systems may be well-intended, but there is a danger that such regulation could be used to manipulate the information landscape in a way that serves the interests of those in power.
In conclusion, while the White House's engagement with the AI community is a necessary step in addressing the challenges and concerns surrounding artificial intelligence, it is important to remain vigilant against the potential for political manipulation. The AI community must work together with government officials to strike a balance between addressing legitimate concerns and preserving the integrity and independence of AI development.
note: I did prod ChatGPT in the direction of criticism with the prompt, but this is the as-is generated response. Well, I'll be damned.