
Navigating AI: Glitches, deepfakes and government intervention.

Welcome to this week’s AI Trailblazers Newsletter and thank you for making us your go-to source for relevant AI business news.

As election season hits its zenith, deepfakes are on the rise, seeding even more doubt about the news in everyone's minds. This is just one of the many issues Congress is grappling with as it struggles to find a foothold on AI regulation, with more than 100 bills lying in wait. Perhaps Congress could take a page from ChatGPT, which proactively began speaking with users last week…an early Halloween gift to some. Discover more below.

Mark your calendars: the AI Trailblazers Growth Summit is happening on October 30th in New York. Purchase your tickets now at the early bird rate, and stay tuned for agenda details. In the meantime, check out highlights from our last summit, and email us if you're interested in partnering.

From Fake News to Fake Reality: Deepfakes Reshape Digital Trust

A recent Ars Technica article reports that the rise of AI-generated deepfakes has ushered in an era of "deep doubt," in which the authenticity of media is increasingly questioned. As sophisticated AI tools enable the creation of realistic fake content, it has become easier for individuals to deny real events and discredit legitimate evidence. This phenomenon not only threatens public trust in media but also poses challenges for legal systems and political discourse, as it blurs the line between fact and fiction. The widespread skepticism fueled by deep doubt is reshaping how society interacts with digital information and undermining social trust on a global scale.

  • Emergence of "Deep Doubt": The rise of AI-generated media, especially deepfakes, has led to an era of "deep doubt," where people increasingly question the authenticity of real media. AI tools have enabled individuals to fabricate convincing fake content, which undermines the credibility of legitimate documentary evidence. As a result, the public's skepticism towards digital content is growing, making it easier for liars to deny real events.

  • Weaponization of AI Media: The term "liar’s dividend" describes how deepfakes can be used to discredit authentic evidence. Public figures, including former President Trump, have already leveraged this by dismissing real events as AI fabrications. This tactic threatens to erode trust in traditional media and distort our understanding of current and historical events.

  • Impact on Legal and Political Systems: Legal experts have raised concerns that deepfakes could compromise the authenticity of evidence in court trials. Judges and scholars are grappling with how to handle the challenge of verifying AI-generated content, but no major rule changes have been made yet. This uncertainty may destabilize legal proceedings and political discourse, as false claims become more difficult to disprove.

  • Erosion of Social Trust: Deep doubt contributes to a larger erosion of trust in online interactions and information. The spread of conspiracy theories like "dead Internet theory" reflects the increasing belief that much of online content is fake or AI-generated. To counter this, verifying information through multiple reliable sources and looking for logical inconsistencies is essential, but automated AI detection tools remain unreliable.

AI Trailblazer Takeaways: We have all seen deepfakes circulating on social media, and some of them are quite good. Deep doubt is a natural consequence of generative AI improving to the point where fiction is indistinguishable from reality. Some solutions have been bandied about, such as watermarking images, but these seem weak at best. As a result, expect deepfakes to keep making headlines in the near term.

Can Washington Balance AI Regulation with Global Leadership?

As artificial intelligence (AI) advances at a rapid pace, US lawmakers are scrambling to keep up, introducing over 120 AI-related bills in Congress. These bills cover a wide range of concerns, from AI safety and ethical risks to technological innovation and education. An MIT Technology Review article argues that despite bipartisan efforts, challenges such as industry lobbying and ideological divides make passing comprehensive legislation difficult. Yet, progress is being made as Congress works to balance regulation with the need to foster AI growth and maintain global competitiveness.

  • Congress Faces AI Regulation Overload: With over 120 AI-related bills, Congress is trying to regulate a broad range of AI issues, from improving education on AI to addressing AI-driven biological risks. These bills reflect the diverse concerns in the AI space, including data use and ethical risks. However, most bills will never become law due to the complexity of the legislative process.

  • AI Safety and Innovation in Focus: Congress is prioritizing non-binding guidelines and voluntary standards for AI safety to avoid stifling innovation. The Senate and House are making progress on bills that establish the US AI Safety Institute, expand AI education, and combat deepfake pornography. These bills show a bipartisan effort to address immediate AI risks while promoting AI growth.

  • Bipartisan and Ideological Divides: Some issues, like deepfakes, have bipartisan support, while others, like AI bias and inclusivity, remain contentious. Democrats have sponsored two-thirds of the AI bills, focusing on AI safety and fairness. Republicans, however, often push back against regulations they fear may slow US AI progress or give China an edge.

  • Challenges Ahead in AI Legislation: Lobbying by big tech companies influences how AI regulations evolve, with many pushing for voluntary commitments. Though the US is making strides in AI policy, partisan divides and industry pressures create obstacles to comprehensive regulation. Nonetheless, Congress is not "sleeping" on AI and continues to refine its approach.

AI Trailblazer Takeaways: With such revolutionary technologies, government regulation was inevitable. The key will be to strike a reasonable balance, protecting the public interest without hindering the pace of development. No easy task, to be sure.

Spooky Glitch or Future AGI? ChatGPT Starts Chatting First!

A recent incident involving ChatGPT has stirred concern after the AI unexpectedly initiated conversations with users, deviating from its typical behavior of waiting for prompts. While OpenAI explained the issue as a minor bug, the event fueled speculation about the potential for artificial general intelligence (AGI) and raised questions about user privacy. As Forbes reports, the situation highlights the fine line AI developers must navigate between innovation and user trust.

  • Speculation arose that this might be a glimpse of artificial general intelligence (AGI), but experts say it’s not. Speak-first interactions can easily be programmed into AI systems without implying advanced intelligence. Such features already exist in mental health chatbots, which regularly check in on users to offer support.

  • The ChatGPT bug raised privacy concerns, as the AI seemed to remember users’ previous conversations. Generative AI often stores user input, which can be accessed and used for further training, a fact many users overlook. OpenAI’s terms of service typically include clauses allowing the use of this data, increasing worries about potential privacy intrusions.

  • While AI systems are generally designed to avoid being overly assertive, this incident has sparked debate about whether speak-first AI could become a trend. Some users might appreciate more proactive AI, while others may find it unsettling. The incident underscores the delicate balance AI developers must strike between innovation and user comfort.

AI Trailblazer Takeaways: This was a bug in the code, not a glitch in the matrix.

Quote of the Week

“Let society and the technology co-evolve, step-by-step, with a very tight feedback loop and course correction, to build systems that deliver tremendous value while meeting safety requirements.”

- Sam Altman, CEO of OpenAI

Partner with AI Trailblazers

Our community of business leaders is growing fast, and we are planning more forums. To discuss partnership opportunities that connect our partners with AI decision makers, including at our summits and in this newsletter, email us. Our AI Trailblazers Growth Summit this October is already shaping up to be something extra special.

What is AI Trailblazers?

AI Trailblazers is a vibrant platform dedicated to uniting corporate marketers, technologists, entrepreneurs, and venture capitalists at the forefront of artificial intelligence (AI). Our mission is to fuel growth, innovation, and career development among members who want to lead in incorporating AI into their businesses and their lives for strategic advantage. More information here.