Microsoft Now Calls OpenAI a Competitor
Friend: An AI Necklace to Combat Loneliness
Avi Schiffmann, known for his COVID-19 tracking site, has introduced Friend, a $99 AI necklace aimed at reducing loneliness. Set to ship in January 2025, Friend connects to your smartphone via Bluetooth, listening continuously and sending proactive messages to foster an emotional connection. Unlike productivity-focused AI tools, Friend is designed purely as a companion, responding to conversations and offering support through its physical presence. Despite the mixed track record of recent AI hardware, Schiffmann has secured $2.5 million in funding for the project.
Why it matters: Friend represents a significant step towards integrating AI into daily emotional support, addressing the growing issue of loneliness.
Google DeepMind’s Compact Gemma 2 2B Outperforms Larger AI Models
DeepMind has launched Gemma 2 2B, a 2.6-billion-parameter AI model that outshines much larger models like GPT-3.5 and Mixtral 8x7B in chatbot performance rankings. It also beats its predecessor, Gemma 1 2B, by more than 10% on benchmarks. Alongside Gemma 2 2B, DeepMind introduced ShieldGemma, a safety content classifier, and Gemma Scope, a model interpretability tool, emphasizing safety and transparency. Gemma 2 2B's efficiency, achieved by distilling knowledge from larger models, marks a shift towards optimizing AI models rather than simply scaling them up.
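For readers who want to try the model themselves, here is a minimal sketch using the Hugging Face transformers library. It assumes the instruction-tuned checkpoint is published as google/gemma-2-2b-it and that you have accepted the model's license on the Hub; check the model card for details.

```python
# Minimal sketch: running Gemma 2 2B locally with Hugging Face transformers.
# Assumes the instruction-tuned checkpoint is "google/gemma-2-2b-it" and that
# you have accepted the model's license on the Hugging Face Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-2b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build a chat-style prompt via the tokenizer's chat template.
messages = [{"role": "user", "content": "Summarize why small models matter."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```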
Why it matters: This trend towards smaller, more efficient AI models addresses computational and environmental concerns, paving the way for sustainable AI development.
OpenAI's Commitment to US AI Safety
OpenAI CEO Sam Altman announced on X (formerly Twitter) that OpenAI will grant the US AI Safety Institute early access to its next AI model. The move follows a similar commitment to the UK's AI Safety Institute and aims to counter perceptions that OpenAI prioritizes product launches over safety. Despite disbanding its Superalignment safety team and facing criticism from former employees, OpenAI says it is dedicating 20% of its computing resources to safety and has removed non-disparagement clauses to encourage whistleblowing.
Why it matters: OpenAI's early access grant to the US AI Safety Institute highlights its commitment to safety over rapid product launches amid internal and external criticisms.
Meta’s AI Studio Lets Users Build Chatbots Without Code
Meta has launched AI Studio in the US, enabling users to create custom chatbots without writing code. By adjusting parameters like avatar tone and personality, users can design chatbots for various purposes, from travel advice to personalized affirmations, and integrate them into popular apps like WhatsApp, Messenger, and Instagram. Inspired by Character.AI, Meta aims to make creating AI models accessible and engaging for everyone.
Why it matters: Meta's AI Studio democratizes AI development, allowing anyone to create and interact with custom AI chatbots, even without coding knowledge.
Perplexity to Share Ad Revenue with Publishers
Perplexity, an AI search engine startup, announced a new “Publishers' Program” that will share ad revenue with content partners like Time, the Texas Tribune, and Fortune. The initiative follows complaints from publishers about their content being used without attribution. The program promises partners a share of ad earnings whenever their content appears in Perplexity's responses, aiming to provide a new revenue source for struggling news outlets.
Why it matters: This move could create a sustainable revenue stream for news publishers, potentially reshaping the relationship between AI platforms and content creators.
Apple Delays Release of AI Features for iPhone
Apple's long-awaited AI capabilities for the iPhone 16 will not be available at launch; Apple Intelligence will instead roll out in phases starting weeks after the phone's September release, and some features, like an AI-enabled Siri, may not arrive until next year. The staggered rollout comes as Apple works to reverse a 10% slump in iPhone sales and to adapt its AI features for the Chinese market.
Why it matters: Apple's cautious approach to releasing AI features aims to avoid the pitfalls experienced by competitors, ensuring a polished and reliable user experience.
Canva Expands AI Capabilities with Leonardo.ai Acquisition
Canva has acquired AI content design startup Leonardo.ai, bringing the startup's 19 million users and 120 employees into the fold. While financial details remain undisclosed, Canva co-founder Cameron Adams confirmed the deal includes cash and stock. Canva plans to integrate Leonardo's technology into its Magic Studio tools or introduce new AI features powered by Leonardo’s models to enhance its platform.
Why it matters: This acquisition strengthens Canva’s position in the AI-driven design market, offering users more advanced and collaborative content creation tools.
Apple Finally Commits to Safe AI
Apple has signed the White House’s voluntary commitment to developing trustworthy AI as it integrates its new AI platform, Apple Intelligence, into its core products. Joining 15 other tech companies, including Amazon, Google, and Microsoft, Apple agrees to test AI systems for security flaws, develop content-labeling systems, and work on unreleased models in secure environments. Unlike the EU’s legally binding AI Act, these safeguards are voluntary.
Why it matters: Apple's commitment to the White House's voluntary AI safety measures underscores its dedication to trustworthy AI development amid industry-wide efforts for self-regulation.
US Government Re-Launches AI Safety Tool
The National Institute of Standards and Technology (NIST) has re-launched Dioptra, an open-source tool designed to assess and monitor AI risks, especially those involving poisoned training data. Dioptra provides a benchmark for companies to test AI models against simulated threats in a "red-teaming" environment and supports government agencies and small to mid-sized businesses in evaluating AI performance claims. This re-launch coincides with the UK's similar tool, Inspect, as both countries collaborate on advanced AI model testing.
Why it matters: NIST's re-launch of Dioptra enhances AI risk assessment and monitoring, providing crucial tools for testing models against threats, supporting both government agencies and smaller businesses.
Musk’s privacy blunder revealed
X users discovered that the platform had quietly opted them in to having their data used to train its AI model, Grok, without telling them. The EU privacy watchdog involved, Ireland's Data Protection Commission (DPC), is “seeking clarity” from X, since GDPR rules require companies to obtain consent before using personal data. The DPC says it is “surprised” by the move, having been “engaging with X on this matter for a number of months,” and while it has not yet received a response, it expects one early this week.
Why it matters: This incident underscores the ongoing tension between tech innovation and regulatory frameworks aimed at protecting user privacy.
JPMorgan Chase unveils LLM Suite for research and productivity
JPMorgan Chase has introduced LLM Suite, a generative AI product designed to function as a research analyst, now available to 50,000 employees in the bank’s asset and wealth management division. LLM Suite assists with writing, idea generation, and document summarization through access to third-party models and complements other apps like Connect Coach and SpectrumGPT. Executives highlighted its role as a general-purpose productivity tool while addressing concerns about potential inaccuracies, though they have not disclosed if LLM Suite has faced such issues. CEO Jamie Dimon emphasized AI's transformative potential, with President Daniel Pinto estimating the technology's value to the bank at $1 billion to $1.5 billion.
Why it matters: This deployment represents one of Wall Street's largest uses of AI, highlighting the sector's increasing reliance on advanced technologies to enhance productivity and manage sensitive information securely.
Runway faces backlash for scraping YouTube videos to train AI
AI startup Runway has come under fire for reportedly using thousands of scraped YouTube videos to train its video generation model, Gen-3 Alpha, raising ethical and legal concerns. An investigation revealed that the dataset includes content from popular creators and major news outlets, as well as piracy sites. The AI model's ability to mimic specific YouTubers' styles suggests extensive training on their content, which YouTube states violates its terms of service.
Why it matters: This incident underscores the ethical and legal challenges AI companies face when using publicly available content for training models without explicit permission.
New AI text-to-image model FLUX.1 outshines competitors
Black Forest Labs has launched its FLUX.1 text-to-image model, quickly surpassing rivals like Midjourney 6.0, DALL-E 3 HD, and Stable Diffusion 3 Ultra in image detail, scene complexity, and prompt adherence. Developed by former Stability AI engineers and backed by notable investors, FLUX.1 relies on architectural advances, including rotary positional embeddings and parallel diffusion transformer blocks, for improved speed and accuracy. Black Forest Labs also plans to release a state-of-the-art text-to-video model soon, posing a challenge to other AI companies.
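For context, rotary positional embeddings (RoPE) encode each token's position by rotating pairs of feature dimensions through position-dependent angles, so attention scores end up depending on relative rather than absolute positions. The sketch below illustrates the general technique in NumPy; it is not Black Forest Labs' implementation, just the core idea.

```python
# Illustrative sketch of rotary positional embeddings (RoPE) -- the general
# technique, not Black Forest Labs' code.
import numpy as np

def rope(x: np.ndarray, base: float = 10000.0) -> np.ndarray:
    """Rotate feature-dimension pairs of x (seq_len, dim) by position-dependent angles."""
    seq_len, dim = x.shape
    assert dim % 2 == 0, "RoPE pairs up feature dimensions"
    half = dim // 2
    # One frequency per feature pair, geometrically spaced as in the RoPE paper.
    freqs = base ** (-np.arange(half) / half)        # (half,)
    angles = np.outer(np.arange(seq_len), freqs)     # (seq_len, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    # Apply a 2-D rotation to each (x1, x2) feature pair.
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)

q = rope(np.random.randn(8, 64))  # queries for an 8-token sequence
```

The key property is that the dot product between two rotated vectors depends only on the difference of their positions, which is one reason RoPE is popular in models that must track long, detailed prompts.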
Why it matters: This leap in AI capabilities could significantly shift the competitive landscape in the text-to-image and video generation market.
EU’s AI law in force now
The European Union's AI Act, risk-based legislation meant to ensure safe and trustworthy AI systems, is officially in force. High-risk AI, such as critical infrastructure and biometric identification systems, will face stringent regulations, while minimal-risk AI, like chatbots, will face fewer restrictions. AI systems that use biometric data for crime forecasting or engage in cognitive behavioral manipulation are banned outright. Companies have as little as six months to comply with the bans on prohibited practices, facing fines of up to €35 million (about $38 million) or 7% of global annual turnover, whichever is higher.
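As a back-of-the-envelope illustration of the “whichever is higher” fine rule (assuming the €35 million / 7% ceiling that applies to prohibited practices):

```python
# Illustrative only: the AI Act's maximum fine for prohibited practices is
# the higher of a fixed amount (EUR 35M) and 7% of worldwide annual turnover.
def max_ai_act_fine(annual_turnover_eur: float) -> float:
    return max(35_000_000, 0.07 * annual_turnover_eur)

print(max_ai_act_fine(200_000_000))      # smaller firm: fixed floor wins -> 35,000,000.0
print(max_ai_act_fine(100_000_000_000))  # big tech: 7% dominates -> 7,000,000,000.0
```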
Why it matters: This legislation significantly impacts global tech companies, especially in the US, potentially delaying AI system launches due to the EU's regulatory environment.
Microsoft says OpenAI is now a competitor in AI and search
Microsoft has listed OpenAI, its long-time strategic partner, as a competitor in its latest annual report. The shift comes shortly after OpenAI introduced a prototype of its new search engine, SearchGPT. Both companies have long acknowledged that their partnership leaves room for rivalry, and episodes like the brief ousting and reinstatement of OpenAI CEO Sam Altman, along with Microsoft's ongoing expansion of its own AI efforts, underscore the complex dynamics between the two tech giants.
Why it matters: This development underscores the increasingly blurred lines in the tech industry, where partnerships can quickly evolve into competitive rivalries.
Argentina will use AI to ‘predict future crimes’
Argentina's government, led by President Javier Milei, has announced the creation of the Artificial Intelligence Applied to Security Unit, which will use machine-learning algorithms to analyze historical crime data and predict future criminal activity, shades of the film Minority Report. The initiative includes deploying facial recognition technology, monitoring social media, and analyzing real-time security camera footage. While the unit is aimed at identifying potential threats, human rights organizations have raised concerns about privacy and the potential misuse of the technology. Amnesty International and other groups warn that large-scale surveillance could chill freedoms and be used to scrutinize specific societal groups.
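To make concrete what “predicting crime from historical data” usually means in practice, here is a purely hypothetical sketch of the standard pattern: a classifier trained on features of past incidents that outputs risk scores. Argentina's actual system has not been disclosed, and every feature below is invented for illustration.

```python
# Purely hypothetical sketch of the generic "predictive policing" pattern:
# train a classifier on features of past incidents to score future risk.
# Argentina's actual system is undisclosed; all features here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Invented features per (district, hour) cell: past-30-day incident count,
# hour of day, population density index -- all random stand-ins.
X = rng.random((500, 3))
y = (X[:, 0] + 0.3 * rng.standard_normal(500) > 0.6).astype(int)  # synthetic labels

model = LogisticRegression().fit(X, y)
risk = model.predict_proba(rng.random((5, 3)))[:, 1]  # risk scores for 5 new cells
print(risk)
```

Critics' concerns largely follow from this pattern itself: a model trained on historically biased incident data will reproduce and amplify that bias in its risk scores.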
Why it matters: The initiative's potential impact on privacy and civil liberties highlights the tension between advancing security measures and protecting human rights.