
François Chollet: AGI progress is accelerating towards 2030, symbolic models will reshape machine learning, and coding agents are revolutionizing automation



New AGI lab aims to revolutionize machine learning with symbolic models, moving beyond traditional deep learning.

Key Takeaways

  • AGI progress is expected to keep accelerating, with significant developments anticipated around 2030.
  • Chollet’s new AGI research lab, Ndea, aims to create a branch of machine learning fundamentally different from deep learning.
  • Symbolic models could offer more efficient and generalizable solutions than traditional parametric models.
  • AI and machine learning are expected to evolve toward optimality, moving away from today’s technology stacks.
  • Coding agents succeed because code provides a verifiable reward signal, enabling automation in formal domains.
  • Progress of reasoning models in non-verifiable domains such as essay writing will be slow because it depends on costly human-annotated data.
  • Code-based training environments have substantially advanced AI capabilities in programming.
  • AGI requires a model that, like a human, can learn and adapt to new tasks efficiently from minimal data.
  • Economically useful work will likely be automated before true AGI is achieved.
  • Building AGI on top of current LLMs is seen as inefficient and suboptimal for future AI research.

Guest intro

François Chollet is the co-founder of Ndea, a startup focused on developing AGI through program synthesis, which he launched with Zapier co-founder Mike Knoop after leaving Google in November 2024. He created the Keras deep-learning library in 2015 and published the ARC-AGI benchmark in 2019 to measure AI systems’ ability to solve novel reasoning problems. In 2024, he launched the ARC Prize, a $1 million competition to advance progress toward artificial general intelligence.

Why AGI progress is inevitable

  • AGI progress is expected to continue accelerating, with significant developments anticipated around 2030. — François Chollet
  • The inevitability of AI progress suggests that stopping it is unlikely. — François Chollet
  • Chollet treats the timeline to AGI as essential context for AI development: in his view progress is effectively unstoppable, and the efficiency of human-like learning is the fundamental requirement for getting there.

The new frontier in machine learning at Ndea

  • The goal of the new AGI research lab, Ndea, is to create a new branch of machine learning that is fundamentally different from deep learning. — François Chollet
  • Appreciating this approach requires understanding current machine learning paradigms and where deep learning falls short.
  • Ndea’s bet is that symbolic models can deliver more efficient and generalizable solutions than traditional parametric models, and that this new paradigm could reshape the future of AI research.

The shift towards symbolic models

  • Symbolic models can provide more efficient and generalizable machine learning solutions compared to traditional parametric models. — François Chollet
  • The claimed benefits are twofold: better efficiency and better generalization than weight-fitting approaches.
  • Recognizing these advantages requires understanding the limitations of current deep learning; the shift toward symbolic models is framed as a move toward more optimal machine learning.
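To make the parametric-versus-symbolic contrast concrete, here is a minimal toy sketch (my own illustration, not Ndea’s actual method): a parametric model tunes a continuous weight by gradient descent and only approximates the target function, while a symbolic search over a tiny DSL of discrete programs can return a rule that fits every observation exactly and generalizes by construction.

```python
# Toy contrast of parametric vs. symbolic fitting on the data 2, 4, 6, 8.
# Purely illustrative -- not any lab's actual method.

xs = [1, 2, 3, 4]
ys = [2, 4, 6, 8]

# Parametric: fit a continuous weight in y = w * x by gradient descent.
w = 0.0
for _ in range(200):
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= 0.01 * grad
print(f"parametric weight: {w:.3f}")  # converges toward 2.0

# Symbolic: enumerate a tiny DSL of discrete programs and keep the first
# one that reproduces ALL observations exactly.
dsl = {
    "x + 1": lambda x: x + 1,
    "2 * x": lambda x: 2 * x,
    "x ** 2": lambda x: x ** 2,
}
program = next(
    name for name, f in dsl.items()
    if all(f(x) == y for x, y in zip(xs, ys))
)
print("symbolic program:", program)  # an exact, human-readable rule
```

The symbolic result is a discrete, inspectable program that matches the data exactly, which is one intuition behind the efficiency and generalization claims above.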

The future of AI and machine learning

  • Machine learning and AI will evolve towards optimality, moving away from current stacks. — François Chollet
  • The inevitability of AI progress suggests a need for more efficient foundational structures. — François Chollet
  • The prediction hinges on today’s limitations: if current technology stacks are far from optimal, the significant advances will come from new, more efficient foundations and paradigms rather than from scaling what exists.

The success of coding agents

  • Coding agents succeed because code offers a verifiable reward signal, enabling automation in formally verifiable domains. — François Chollet
  • Because a program either passes its tests or does not, correctness can be checked automatically, with no human judgment in the loop.
  • The same mechanism extends to other formal domains, such as mathematics, where results can likewise be machine-checked.
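The "verifiable reward signal" idea can be sketched in a few lines: score a candidate solution by the fraction of unit tests it passes, so the reward requires no human annotator. The function name `coding_reward` and the `solve(x)` convention below are illustrative assumptions, not anything stated in the podcast.

```python
# Sketch of a verifiable reward: a candidate program is scored by the
# fraction of unit tests it passes. The tests themselves define "correct",
# so no human annotation is needed. All names here are illustrative.

def coding_reward(candidate_source: str, tests: list[tuple[int, int]]) -> float:
    """Execute candidate code defining `solve(x)` and return its pass rate."""
    namespace: dict = {}
    try:
        exec(candidate_source, namespace)  # run the model's submitted code
        solve = namespace["solve"]
        passed = sum(1 for x, expected in tests if solve(x) == expected)
        return passed / len(tests)
    except Exception:
        return 0.0  # crashing or malformed code earns zero reward

tests = [(1, 1), (2, 4), (3, 9)]            # solve(x) should square x
good = "def solve(x):\n    return x * x"    # passes all 3 tests
bad = "def solve(x):\n    return x + x"     # passes only (2, 4)

print(coding_reward(good, tests))  # 1.0
```

This is exactly what makes code (and math) different from essay writing: the scoring function is cheap, objective, and infinitely repeatable.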

Challenges in non-verifiable domains

  • The progress of reasoning models in non-verifiable domains like essay writing will be slow due to reliance on costly human-annotated data. — François Chollet
  • Without an automatic check of quality, training must fall back on human annotation, which is expensive and does not scale.
  • This asymmetry explains why current models lag on complex, non-verifiable tasks and why new research approaches are needed there.

Advancements in code-based training environments

  • The creation of code-based training environments has significantly advanced AI capabilities in programming. — François Chollet
  • Structured environments matter because they pair each task with a verifiable reward signal, giving models dense, reliable feedback during training.
  • Their transformative impact on programming tasks suggests the approach could carry over to other domains where outcomes can be checked automatically.
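A code-based training environment can be pictured as a loop that serves programming tasks and verifies submissions automatically. The sketch below is a hypothetical toy: the `CodeEnv` class, its task list, and the `reset`/`step` interface are my assumptions, loosely modeled on common RL environment APIs, not a description of any lab’s actual setup.

```python
import random

# Toy sketch of a code-based training environment: each episode presents a
# programming task, and step() verifies the submitted solution by running
# it against hidden tests. All names here are illustrative assumptions.

class CodeEnv:
    TASKS = [
        ("return the square of x", [(2, 4), (5, 25)]),
        ("return x plus one", [(2, 3), (9, 10)]),
    ]

    def __init__(self, seed: int = 0):
        self.rng = random.Random(seed)
        self.tests: list[tuple[int, int]] = []

    def reset(self) -> str:
        """Sample a task; the prompt is the observation, tests stay hidden."""
        prompt, self.tests = self.rng.choice(self.TASKS)
        return prompt

    def step(self, candidate_source: str) -> float:
        """Run the candidate's solve(x) against hidden tests; reward = pass rate."""
        ns: dict = {}
        try:
            exec(candidate_source, ns)
            return sum(ns["solve"](x) == y for x, y in self.tests) / len(self.tests)
        except Exception:
            return 0.0  # broken submissions earn zero reward

env = CodeEnv(seed=0)
prompt = env.reset()
reward = env.step("def solve(x):\n    return x * x")
```

Because the environment grades every attempt itself, a model can be trained on millions of such episodes without any human in the loop, which is the scaling advantage the section describes.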

The trajectory towards automation

  • We are on a trajectory to automate economically useful work before achieving true AGI. — François Chollet
  • The distinction matters: automating valuable work in verifiable domains does not require general intelligence, only reliable reward signals.
  • Current progress in automation therefore sets expectations for the near term, even as AGI itself remains a separate and harder goal.

The inefficiency of building AGI on current LLMs

  • Building AGI on top of current LLMs would be inefficient and not optimal for future AI research. — François Chollet
  • The argument is that the limitations of current LLM technology make it a poor foundation, so reaching AGI calls for new, more efficient approaches rather than scaling the existing stack.
Disclosure: This article was edited by Editorial Team. For more information on how we create and review content, see our Editorial Policy.



© Decentral Media and Crypto Briefing® 2026.

Source: https://cryptobriefing.com/francois-chollet-agi-progress-is-accelerating-towards-2030-symbolic-models-will-reshape-machine-learning-and-coding-agents-are-revolutionizing-automation-y-combinator-startup-podcast/
