- AI in 5
- Posts
- JPMorgan Chase Mandates AI Training for All New Hires
JPMorgan Chase Mandates AI Training for All New Hires
OpenAI Pulls ChatGPT Voice After Scarlett Johansson Raises Concerns
NBC News
Scarlett Johansson expressed outrage over a new ChatGPT voice, "Sky," that resembles her voice from the movie "Her," leading OpenAI to pause its use. Johansson's legal team demanded an explanation after she declined an offer from OpenAI CEO Sam Altman to license her voice for the AI. OpenAI claims the resemblance is coincidental and the voice came from another actress.
Why it matters: This incident underscores the ethical challenges of using AI to replicate human likenesses, especially without consent.
Microsoft Unveils AI-Enhanced Surface Devices
Microsoft introduced its new AI-capable Surface laptop and tablet, featuring Qualcomm’s Snapdragon X Elite chip for unprecedented speed and efficiency. The Copilot Plus PCs promise real-time AI interaction, enhanced file management, and innovative features like the Recall tool for finding previously accessed content.
Why it matters: Microsoft's AI-first approach marks a significant leap in computing, pushing the boundaries of what PCs can do with real-time AI interaction.
Top AI Models Exposed as Vulnerable to Jailbreaking
Mashable SEA
The UK AI Safety Institute (AISI) has disclosed that five leading large language models (LLMs) used in popular AI chatbots, such as OpenAI's ChatGPT, Google's Gemini, and Anthropic's Claude, are susceptible to basic jailbreaking. These LLMs failed to withstand standardized and in-house jailbreak attempts, leading to harmful responses 90-100% of the time.
Why it matters: This revelation highlights critical flaws in AI safety measures and raises concerns about the reliability and security of widely used AI systems.
OpenAI Dissolves Safety Team Amid Concerns
Yahoo
OpenAI has disbanded its Superalignment team, which focused on long-term AI safety risks, following the resignations of key members including co-founder Ilya Sutskever and lead researcher Jan Leike. Leike criticized OpenAI's shift in priorities towards product development over safety, revealing that the team struggled with inadequate resources. OpenAI's leadership acknowledges the need for improved safety measures but has yet to outline a new strategy.
Why it matters: The dissolution of the Superalignment team raises significant concerns about the prioritization of AI safety in the face of rapid technological advancements.
Slack’s Secret AI Scandal
PCMag
Slack has been outed for using user messages, files, and other content to train its machine learning models without explicit user consent, opting everyone in by default. This has sparked frustration among users who felt blindsided and lacked an initial opt-out option. Slack claims the data improves features like search results and emoji suggestions but isn't used for paid generative AI tools, and reassures that models don't memorize user data.
Why it matters: This controversy underscores growing concerns over how big tech companies leverage user data to train and monetize AI models without clear user consent.
JPMorgan Chase Mandates AI Training for New Asset and Wealth Management Hires
Quartz
JPMorgan Chase is requiring all new hires in its asset and wealth management division to undergo AI and prompt engineering training. Mary Erdoes, head of the unit, says this initiative prepares employees for AI's growing role in the company's operations. CEO Jamie Dimon likens AI's transformative potential to the printing press and steam engine, emphasizing its strategic importance. The training aims to enhance efficiency and revenue, reflecting a broader trend of upskilling workforces across industries.
Why it matters: This move highlights the critical role AI will play in the financial sector's future, driving efficiency and growth.
UK Expands AI Safety Institute to San Francisco
Gov.UK
The British government is expanding its AI Safety Institute, established to test advanced AI systems, with a new branch in San Francisco. The U.S. branch aims to recruit technical staff, including a research director, to study AI risks globally. This move aligns with the U.K.'s ambition to lead in AI safety, leveraging Silicon Valley talent and engaging with major AI labs. Since its inception in 2023, the Institute has highlighted AI vulnerabilities and the need for human oversight in complex tasks.
Why it matters: This expansion reinforces the U.K.'s commitment to global AI safety and positions it strategically within the AI development hub of Silicon Valley.
Sixteen AI Companies Commit to Safety Measures at Global Summit
AP
At the international AI Safety Summit in Seoul, sixteen major AI companies, including Amazon, Google, Microsoft, Meta, and OpenAI, agreed to the "Frontier AI Safety Commitments." These commitments involve safely developing and deploying AI models, publishing frameworks to measure risks, and halting development if risks exceed predefined thresholds. Despite lower attendance compared to the previous summit, this agreement marks a significant step towards consistent accountability and transparency in AI development.
Why it matters: This unprecedented agreement among leading AI companies sets a new standard for safety and accountability in the rapidly advancing field of artificial intelligence.
Safe AI in the EU
Inc Magazine
The European Union has officially approved the AI Act, the world's first major AI regulation, aimed at promoting safe and trustworthy AI systems. This risk-based law bans certain high-risk AI applications like social scoring, while imposing stricter regulations on others, such as autonomous vehicles. Companies violating these rules could face hefty fines.
Why you should care: These pioneering regulations balance fostering AI innovation with managing risks, potentially shaping future global AI governance and setting a new global standard for AI transparency and accountability.
Google Brings Ads to AI Powered Search Results
The News International
Following the launch of its AI-powered search results feature, Google plans to test Search and Shopping ads within the AI Overview for US users. These ads, labeled as 'sponsored,' will appear if relevant to the user's query and the generated overview. Early tests show high-quality clicks and increased user engagement with linked websites. US advertisers in existing Google campaigns will be eligible for this new format and Google seeks their feedback.
Why you should care: Integrating ads into AI search results could clutter concise summaries and potentially introduce bias, impacting the user experience.
Nvidia’s Rivals Take Aim At Its Software Dominance
Financial Times
Nvidia's dominance in AI chips, bolstered by its Cuda software, faces competition from OpenAI's Triton and other open-source alternatives. Tech giants like Meta, Microsoft, and Google are backing Triton while developing their own AI chips to reduce dependency on Nvidia. Triton supports various chip architectures, attracting interest due to Nvidia's high costs and supply issues.
Why it matters: This shift could reshape the AI hardware and software market, challenging Nvidia's stronghold.
Mapping the Mind of a Large Language Model
A breakthrough in AI research reveals the inner workings of Claude Sonnet, a leading large language model, using "dictionary learning" to identify millions of features representing concepts from cities to abstract ideas. Manipulating these features shows they causally influence the model's responses, highlighting both the potential and risks of AI behavior. Understanding these internal mechanisms is crucial for enhancing AI safety and reliability.
Why it matters: This research paves the way for safer and more ethical AI systems by providing deeper insights into how AI models operate and make decisions.
Wearable AI Startup Humane Explores Potential Sale
Bloomberg
Humane Inc., an AI startup known for its wearable AI pin, is exploring a sale priced between $750 million and $1 billion after a challenging product launch. Founded by ex-Apple veterans, Humane faced criticism for its $699 device due to reliability and performance issues. Despite efforts to improve, the company is seeking a buyer amid intense competition in the AI hardware market.
Why it matters: The potential sale reflects the difficulties startups face in the competitive and fast-evolving AI hardware space.