Who decides what the AI tells you? Campbell Brown, once Meta’s news chief, has thoughts


Campbell Brown has spent her career searching for accurate information, first as a renowned television journalist and then as Facebook’s first and only dedicated news chief. Now, as she watches AI reshape the way people consume information, she sees history threatening to repeat itself. This time, she’s not waiting for someone else to fix it.

Her company, Forum AI – which she recently discussed with TechCrunch’s Tim Fernholz at a StrictlyVC evening in San Francisco – evaluates how foundation models perform on what she calls “high-stakes topics” (geopolitics, mental health, finance, recruiting), topics where “there are no clear yes or no answers, where it’s murky, nuanced and complex.”

The idea is to find the world’s top experts, have them design benchmarks, and then train AI judges to evaluate models at scale. For Forum AI’s geopolitical work, Brown recruited Niall Ferguson, Fareed Zakaria, former Secretary of State Tony Blinken, former House Speaker Kevin McCarthy and Anne Neuberger, who led cybersecurity in the Obama administration. The goal is to get AI judges to reach roughly 90% agreement with human experts, a threshold she says Forum AI has been able to reach.

Brown traces the origin of Forum AI, founded 17 months ago in New York, to a specific moment. “I was at Meta when ChatGPT first launched publicly,” she recalled, “and I remember realizing shortly after that this was going to be the funnel through which all the information would flow. And it’s not very good.” The implications for her own children made the moment seem almost existential. “My kids are going to be really stupid if we don’t figure out how to fix this,” she recalled thinking.

What frustrated her the most was that accuracy didn’t seem to be anyone’s priority. Foundation model companies, she said, are “extremely focused on coding and math,” while news and information is harder. But harder, she argued, does not mean optional.

In fact, when Forum AI began evaluating leading models, the findings weren’t exactly encouraging. She cited Gemini pulling “stories that have nothing to do with China” from Chinese Communist Party websites and noted a left-wing political bias in almost all of the models. More subtle flaws also abound, she said, including missing context, missing perspective and arguments asserted one-sidedly without acknowledging counterpoints. “There’s a long way to go,” she said. “But I also think there are some very simple solutions that would greatly improve the results.”

Brown spent years at Facebook observing what happens when a platform optimizes for the wrong thing. “We failed at a lot of the things we tried,” she told Fernholz. The fact-checking program she created no longer exists. The lesson, even if social media has turned a blind eye to it, is that optimizing for engagement has been terrible for society and left many people less informed.

Her hope is that AI can break that cycle. “Right now things could go either way,” she said; companies could give users what they want, or they could “give people what’s real, honest, and truthful.” She acknowledged that the idealistic version of that – AI optimized for truth – might seem naïve. But she believes enterprise may be the unlikely ally in this case. Companies that use AI for credit, lending, insurance, and hiring decisions care about accountability and “will want you to optimize to get it right.”

That enterprise demand is also what Forum AI is betting its business on, although converting interest in compliance into consistent revenue remains a challenge, particularly given that much of today’s market is still satisfied with checkbox audits and standardized benchmarks that Brown considers inadequate.

The compliance landscape, she said, is “a joke.” When New York City passed the first hiring-bias law requiring AI audits, the comptroller found that more than half of the audited systems had violations that went undetected. Real testing, she said, requires domain expertise to cover not only known scenarios but also the edge cases that “can get you into trouble that people don’t think about.” And that work takes time. “Smart generalists are not going to cut it.”

Brown, whose company raised $3 million last fall in a round led by Lerer Hippeau, is uniquely positioned to describe the disconnect between the AI industry’s self-image and the reality of most users. “The leaders of big tech companies say, ‘This technology is going to change the world,’ ‘it’s going to put you out of a job,’ ‘it’s going to cure cancer,’” she said. “But for a normal person who just uses a chatbot to ask basic questions, they still get a lot of garbage and wrong answers.”

Trust in AI is at extraordinarily low levels and she believes skepticism is, in many cases, justified. “There’s one conversation going on in Silicon Valley, and a totally different conversation going on among consumers.”



