The year is 2035. What does a hopeful, AI-driven future look like? According to two hypothetical scenarios from the Foresight Institute, it could go one of two ways:
The Tool AI World: We achieve progress by building powerful AI tools that amplify human judgment, not replace it. Think safety through control.
The d/acc World: We build a resilient society by accelerating decentralized and democratic tech, creating a future based on distribution of power, not its concentration.
GPT-5 launched last week, sparking a major question: Is AI progress still on an exponential curve? While the initial public reaction might be a resounding "no," the data on complex software tasks tells a different story… This hidden progress is a critical concern for the safe development of advanced AI. Are we prepared for what's coming in the next couple of years?
Our latest monthly newsletter announces our official rebranding from "AI Safety Cape Town" to "AI Safety South Africa". Our north-star mission has always been simple: "To foster a lively community of intelligent, compassionate, and high-agency people around the topic of responsible AI use." Our national rebranding enables us to pursue this on a larger, more impactful scale. Welcome to AI Safety South Africa!
Inside this newsletter: information on our upcoming AI Safety courses starting 30th June; a data-driven counterpoint to the AI job-automation hype, arguing that, given current AI limitations, we are in a phase of "human-AI augmentation", while acknowledging the real risk of a future automation tipping point; a recap of the AISCT community event "Understanding the Promise and Peril of AI Progress: A Roundtable Discussion", where founder Leo Hyams and community member Charl Botha were guest speakers; and recommended Substacks for further reading.
This newsletter offers resources and actionable suggestions to help navigate this uncertain future. Maintaining agency in an AI-driven world means developing skills built around cultivating taste, long-term thinking, and human connection. It also flags emerging risks that show how fast AI capabilities are evolving.
As AI systems grow more capable, we're approaching a fundamental rupture: What happens when human labor is no longer essential for economic productivity? This essay explores the implications of that shift, from collapsing labor-capital contracts to rising economic disempowerment, and what it means for the future of human agency. "The real danger of a post-labor world isn't that it fails, but that it works… without us."
We're excited to officially announce that we will be running the Introduction to Cooperative AI fellowship in collaboration with the Cooperative AI Foundation. Cooperative AI is an emerging research field focused on improving the cooperative intelligence of advanced AI for the benefit of all. This includes both addressing risks of advanced AI related to multi-agent interactions and realizing the potential of advanced AI to enhance human cooperation. Through this course, you'll understand the fundamentals of Cooperative AI and its relevance for AI Safety. See the post for more details - apply before the 26th of March, 2025!
The post discusses the critical debate over the risks of centralizing versus decentralizing AI development, highlighting the trade-offs involved. Centralized AI can concentrate power but may allow for better accountability, while decentralized, open-source AI fosters broader innovation but risks misuse by bad actors. A recent Reuters report reveals that China has been using open-source AI models like Meta's Llama for military applications, including surveillance and intelligence. This raises concerns about the unintended consequences of open-sourcing powerful AI technologies, as they can strengthen authoritarian regimes and exacerbate geopolitical tensions.
This edition features a recap of our co-founders' participation in the ARENA program, where they gained insights into AI safety and coding fundamentals and worked on vision interpretability. The experience highlighted the need for greater organizational capacity in the AI safety field. In our reading group, we discussed "GSM-Symbolic," analyzing LLMs' reasoning limitations: the paper shows that LLMs struggle with irrelevant information, while hinting at potential improvements with scale. We're also expanding our team for community events, with upcoming gatherings in November.
Leo Hyams, co-founder of AI Safety Cape Town (AISCT), introduces the organization, which focuses on ensuring AI systems benefit humanity. AISCT believes South Africa has untapped talent for AI safety and has built a growing community, a research lab, and international partnerships. AISCT has participated in events like Deep Learning IndabaX 2024 and successfully ran a fellowship with AI Safety Sweden. Their current research evaluates large language models for traits that may harm human agency, and they host biweekly AI safety reading groups. Upcoming plans include a fellowship with the European Network for AI Safety and a Cape Town retreat for young professionals exploring AI safety careers. Readers are invited to engage with the organization and stay updated.