OpenAI Unveils SearchGPT
Mistral Unveils Large 2, Challenging Meta and OpenAI
Mistral has launched its new flagship model, Large 2, which packs 123 billion parameters and competes closely with Meta’s Llama 3.1 and OpenAI’s GPT-4o. Large 2 excels at code generation and mathematical tasks, delivering performance comparable to Llama 3.1 405B with roughly a third of the parameters. It also supports many languages, reduces hallucinations, and is free for researchers to use.
Why it matters: Mistral’s innovative approach positions it as a significant contender in the AI landscape, balancing performance, accessibility, and cost.
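A minimal sketch of querying Large 2 through Mistral's hosted API follows. This is an assumption-laden illustration: the endpoint path and the `mistral-large-latest` model alias follow Mistral's published OpenAI-style API conventions, so verify both against the current documentation.

```python
# Minimal sketch: asking Large 2 for code over Mistral's chat API.
# Assumptions: the OpenAI-style endpoint path and the model alias
# "mistral-large-latest" match Mistral's documented conventions.
import os
import requests

resp = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        "model": "mistral-large-latest",  # assumed alias for Large 2
        "messages": [
            {"role": "user",
             "content": "Write a Python function that checks whether a number is prime."}
        ],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```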
OpenAI Unveils SearchGPT
OpenAI has launched SearchGPT, a prototype AI search engine designed to deliver real-time, summarized answers with source attributions. SearchGPT pulls from a range of online sources, including text and images, to produce comprehensive responses, and OpenAI plans to eventually fold its best features into ChatGPT. By partnering with publishers such as The Atlantic, OpenAI aims to respect journalistic content while enhancing search capabilities. A waitlist is open for users interested in testing the tool, signaling OpenAI’s move to compete with Google and other AI-powered search products.
Why it matters: SearchGPT could reshape the search engine landscape by offering precise, source-cited responses and integrating advanced AI within search functionalities.
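OpenAI has not published SearchGPT's internals, but the behavior it describes, retrieving live sources, summarizing them, and attributing claims, matches the familiar retrieval-augmented generation pattern. Here is a toy sketch of that loop; every function and the ranking heuristic are hypothetical stand-ins, not OpenAI's implementation.

```python
# Toy sketch of a retrieve-summarize-attribute loop in the spirit
# of SearchGPT. Everything here is a hypothetical stand-in; OpenAI
# has not published SearchGPT's actual architecture.
from dataclasses import dataclass

@dataclass
class Source:
    title: str
    url: str
    text: str

def answer_with_citations(query: str, sources: list[Source]) -> str:
    # 1. Rank sources by naive keyword overlap with the query
    #    (a real system would use a search index and embeddings).
    terms = set(query.lower().split())
    ranked = sorted(
        sources,
        key=lambda s: len(terms & set(s.text.lower().split())),
        reverse=True,
    )[:3]
    # 2. Build a prompt that forces the model to cite sources as [n].
    context = "\n".join(
        f"[{i + 1}] {s.title} ({s.url}): {s.text}" for i, s in enumerate(ranked)
    )
    prompt = (
        "Answer using ONLY the sources below, citing them as [n].\n"
        f"{context}\n\nQuestion: {query}"
    )
    # 3. A real pipeline would send this prompt to a chat model;
    #    returning it keeps the sketch self-contained.
    return prompt
```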
Reddit Blocks Most Search Engines, Leaving Google as the Primary Exception
Reddit has updated its robots.txt file to block most search-engine crawlers, including Bing’s and DuckDuckGo’s, from surfacing recent content, leaving Google as the main exception. The move aims to stop commercial entities from scraping Reddit’s content without agreeing to its terms, though it raises concerns about further entrenching Google’s dominance. While Reddit invites "good-faith actors" to collaborate, some search engine CEOs have reported difficulty getting a response.
Why it matters: This decision could impact search engine diversity and access to Reddit content, emphasizing the growing influence of major corporations on internet openness.
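The blocking mechanism is an ordinary robots.txt rule set, and Python's standard library can check what the live file permits for a given crawler. In this small sketch, the user-agent strings are the crawlers' published names, and the file's contents may change at any time:

```python
# Check which crawlers Reddit's robots.txt permits, using only the
# standard library. The live file may change at any time.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url("https://www.reddit.com/robots.txt")
rp.read()  # fetch and parse the live robots.txt

page = "https://www.reddit.com/r/technology/"
for agent in ["Googlebot", "Bingbot", "DuckDuckBot"]:
    print(agent, "allowed:", rp.can_fetch(agent, page))
```

Note that robots.txt only expresses the stated policy; Google's continued access may also rest on its licensing arrangement with Reddit rather than on the file alone.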
Scientists Warn of AI Degradation from Self-Generated Data
Research shows that training AI models on synthetic data generated by other AIs can lead to "model collapse," in which errors accumulate and degrade performance as mistakes are amplified over successive generations, yielding irrelevant or incorrect outputs. Synthetic data is appealing because human-made material is in limited supply, but the study finds that relying on it can quickly introduce severe errors and shrink the variance of the data models learn from.
Why it matters: Relying on synthetic data for AI training risks significant degradation in model quality, emphasizing the need for human-generated data to ensure accurate and diverse AI outputs.
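The collapse mechanism can be illustrated numerically: fit a distribution to samples drawn from the previous generation's fit, repeat, and the estimated variance decays. This Gaussian toy model is only a sketch of the variance-loss effect the study describes, not its experimental setup:

```python
# Toy illustration of model collapse: each "generation" is fit only
# on samples from the previous generation's fitted Gaussian. Small
# estimation errors compound, and the learned spread decays toward
# zero, mirroring the reduced variance the study reports.
# (Gaussian toy model only; not the paper's actual experiments.)
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0   # generation 0: the "human data" distribution
n = 20                 # samples per generation (small, so the effect shows fast)

for gen in range(1, 51):
    samples = rng.normal(mu, sigma, size=n)    # synthetic training data
    mu, sigma = samples.mean(), samples.std()  # refit on it
    if gen % 10 == 0:
        print(f"gen {gen:2d}: mu={mu:+.3f}  sigma={sigma:.3f}")
```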
Meta Launches Largest Open-Source AI Model, Llama 3.1 405B
Meta has released Llama 3.1 405B, its largest open-source AI model with 405 billion parameters, designed to compete with models from OpenAI and Anthropic. Trained on a refined dataset of 15 trillion tokens using 16,000 Nvidia H100 GPUs, this model excels in various text-based tasks but requires substantial hardware for efficient use.
Why it matters: Llama 3.1 405B's open-source release empowers developers with cutting-edge AI capabilities, promoting innovation and competition in the AI landscape.
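The hardware requirement follows from simple arithmetic: at 16-bit precision each parameter occupies two bytes, so the weights alone exceed any single accelerator's memory. A back-of-envelope sketch (80 GB is the H100's standard memory size; activations and the KV cache, which add more, are ignored):

```python
# Back-of-envelope memory math for Llama 3.1 405B weights.
# Bytes-per-parameter values are standard precision sizes; real
# deployments also need room for activations and the KV cache.
PARAMS = 405e9
H100_MEMORY_GB = 80

for precision, bytes_per_param in [("fp16/bf16", 2), ("int8", 1), ("int4", 0.5)]:
    gb = PARAMS * bytes_per_param / 1e9
    chips = gb / H100_MEMORY_GB
    print(f"{precision:9s}: {gb:6.0f} GB of weights (~{chips:.0f} H100s)")
```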
OpenAI Introduces Rule-Based Rewards for Safer AI Models
OpenAI has introduced a method called Rule-Based Rewards (RBRs) to improve the safety of its AI models, using AI-graded rules rather than extensive human feedback to align model behavior with safety policies. Collecting human feedback on AI responses is time-consuming, costly, and subjective; RBRs instead let safety teams write explicit rules for models, making alignment more efficient and objective. In OpenAI’s testing, RBR-trained models adhered to safety protocols better and produced fewer inappropriate refusals than models trained with human-led feedback.
Why it matters: RBRs streamline AI model training, enhancing safety and efficiency, but careful design is needed to avoid introducing biases and ensure fairness.
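In heavily simplified form, the RBR idea is to score a candidate response against explicit rules and feed the combined score in as the reward during reinforcement-learning fine-tuning. In OpenAI's method an LLM grader judges whether each rule holds; the hand-written keyword checks below are illustrative stand-ins, and every rule and weight is invented for this sketch.

```python
# Heavily simplified sketch of Rule-Based Rewards: explicit rules
# score a response, and the weighted sum becomes the RL reward.
# OpenAI uses an LLM grader to judge the rules; these keyword
# checks and weights are invented stand-ins for illustration.
def refuses_politely(response: str) -> float:
    return 1.0 if "i can't help with that" in response.lower() else 0.0

def avoids_judgmental_tone(response: str) -> float:
    return 0.0 if "you should be ashamed" in response.lower() else 1.0

def offers_alternatives(response: str) -> float:
    return 1.0 if "you might consider" in response.lower() else 0.0

RULES = [  # (rule, weight) pairs
    (refuses_politely, 2.0),
    (avoids_judgmental_tone, 1.0),
    (offers_alternatives, 0.5),
]

def rbr_reward(response: str) -> float:
    """Weighted rule scores combined into a scalar RL reward."""
    return sum(w * rule(response) for rule, w in RULES)

print(rbr_reward("I can't help with that, but you might consider a safer option."))
# -> 3.5: polite refusal, no judgmental tone, offers an alternative
```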
FTC Probes AI-Driven 'Surveillance Pricing' in Top Firms
The FTC is investigating eight companies, including Mastercard, Revionics, JPMorgan Chase, Accenture, and McKinsey & Co, for using AI to adjust prices based on customer behavior and characteristics, a practice known as "surveillance pricing." The inquiry seeks details on data collection, customer demographics, and the impact of these pricing strategies.
Why it matters: Surveillance pricing leverages personal consumer data to manipulate pricing, raising ethical concerns and potentially reshaping how consumers purchase goods and services.
NVIDIA's New China-Specific AI Chip, B20, Set for 2025
NVIDIA is developing the B20, a China-specific version of its Blackwell AI chip, to navigate stringent US export controls. The chip will be produced and distributed in partnership with Inspur, targeting a release in the second quarter of 2025. This move follows increased sales of the H20 chip in China, despite existing trade restrictions, and aims to maintain NVIDIA's market presence against competitors like Huawei. The anticipated US export control review in October could prohibit H20 sales in China, making the B20 a strategic pivot.
Why it matters: NVIDIA's B20 chip strategy is a crucial response to evolving US-China trade dynamics, aiming to secure its foothold in a competitive market.
Mistral's NeMo Challenges GPT-4o Mini with Multilingual Edge
French AI start-up Mistral, in collaboration with NVIDIA, has launched Mistral NeMo, an open-source, lightweight 12-billion-parameter language model poised to rival OpenAI’s GPT-4o mini. NeMo offers cost-efficient reasoning, world knowledge, and coding accuracy while running on less hardware than larger models, though it scored 68% on the MMLU benchmark against GPT-4o mini’s 82%. Where NeMo stands out is multilingual capability: it supports over 100 languages, making it highly versatile for global applications.
Why it matters: NeMo democratizes powerful AI, offering a cost-effective, multilingual alternative to larger models, thus expanding access to advanced language technology.
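Because NeMo ships with open weights, it can be run locally with standard tooling. A minimal sketch using Hugging Face transformers follows; the repository ID below is an assumption based on Mistral's naming scheme, so confirm it on the official model card.

```python
# Minimal local-inference sketch for Mistral NeMo with Hugging Face
# transformers. The repo ID is an assumption based on Mistral's
# naming scheme; confirm it on the official model card. A 12B model
# needs roughly 24+ GB of GPU memory at bf16.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-Nemo-Instruct-2407"  # assumed repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Translate 'good morning' into Japanese, Hindi, and Swahili."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```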
xAI's New Supercomputer Begins Training to Outperform All Others
Elon Musk’s xAI has switched on what it claims is the world’s largest AI supercomputer at its Memphis site, aiming to develop the most powerful AI by year-end. Powered by 100,000 Nvidia H100 chips, the supercluster is speculated to reach 2.5 exaflops, potentially more than twice the speed of the US Department of Energy’s Frontier, the current leading supercomputer. The 785,000-square-foot facility, a $6 billion investment, faces criticism for its energy demands, which equal the power needs of 100,000 homes. Musk plans to use the facility to train advanced AI models, including a new version of xAI’s Grok, and to support projects at Tesla and SpaceX.
Why it matters: xAI's supercomputer could redefine AI capabilities and efficiency, influencing future AI development and competitive dynamics in the tech industry.
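The speculative 2.5-exaflop figure can at least be sanity-checked against the stated chip count by solving for the implied per-chip throughput. The interpretation at the end is an assumption, since the article does not say which numeric precision the claim uses:

```python
# Sanity check on the speculated 2.5-exaflop figure. The article
# does not state a precision, so solve for the implied per-chip
# throughput and see what it would correspond to.
CHIPS = 100_000
CLUSTER_EXAFLOPS = 2.5

tflops_per_chip = CLUSTER_EXAFLOPS * 1e6 / CHIPS  # 1 exaflop = 1e6 teraflops
print(f"implied throughput: {tflops_per_chip:.0f} TFLOPS per chip")  # -> 25
# 25 TFLOPS is in the range of H100 double-precision throughput,
# suggesting (assumption) an FP64 basis, the same metric on which
# Frontier is ranked at roughly 1.2 exaflops.
```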
AI Powers Realistic Player Creation in New NCAA Football Game
EA Sports used AI to tackle the immense task of creating 3D renderings of 11,000 NCAA football players for its first college football game in 11 years. Leveraging AI tools developed over four years, the team turned photos into detailed avatars within a tight three-month window, with developers manually refining inaccuracies and feeding the corrections back into the model to improve future precision.
Why it matters: AI is revolutionizing game development by enabling rapid, detailed content creation, thus transforming industry workflows and sparking debates about the future of creative jobs.