White Paper

By Editorial Staff

This foundational document of the th+initiative sets out the technical principles of Neutral Algorithmic Governance: an AI-assisted management framework designed to optimize critical infrastructure, eliminate logistical failures, and ensure systemic resilience independent of political and administrative variability.

wp03 - Nov 28, 2025, 4:51 PM Argentina, UTC−3 / Updated Nov 28, 2025

Institutional Declaration

By Editorial Staff

BUENOS AIRES, Nov 28 (th+initiative) - The analysis published by Pangram Labs on 26 March 2025, widely covered by Nature and other leading scientific outlets, found that up to 21% of peer reviews submitted to the 2026 International Conference on Learning Representations (ICLR) were entirely generated by large language models (LLMs), and that more than 50% showed significant AI involvement. This is the strongest evidence to date of systemic contamination of the traditional peer-review process at elite venues.

wp03 - Nov 28, 2025, 4:51 PM Argentina, UTC−3 / Updated Nov 28, 2025
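The reported shares can be made concrete with a small, purely illustrative sketch: assuming each review carries a detector score between 0 and 1, the fractions of fully generated and AI-assisted reviews fall out as simple bucket counts. The scores and thresholds below are invented for illustration; they are not Pangram Labs' actual data or methodology.

```python
def classify(score, full_threshold=0.9, assist_threshold=0.5):
    """Bucket a review by an assumed AI-likelihood score (hypothetical thresholds)."""
    if score >= full_threshold:
        return "fully_ai"
    if score >= assist_threshold:
        return "ai_assisted"
    return "human"

def share_report(scores):
    """Return the fraction of reviews falling into each bucket."""
    buckets = {"fully_ai": 0, "ai_assisted": 0, "human": 0}
    for s in scores:
        buckets[classify(s)] += 1
    n = len(scores)
    return {k: v / n for k, v in buckets.items()}

# Invented example scores for five reviews:
print(share_report([0.95, 0.6, 0.2, 0.92, 0.4]))
# → {'fully_ai': 0.4, 'ai_assisted': 0.2, 'human': 0.4}
```

In a real analysis the thresholds and scores would come from the detection model itself; this sketch only shows how review-level labels aggregate into venue-level percentages like those cited above.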

Intelligence Brief

By Editorial Staff

This foundational document of the th+initiative sets out the technical principles of Neutral Algorithmic Governance: an AI-assisted management framework designed to optimize critical infrastructure, eliminate logistical failures, and ensure systemic resilience independent of political and administrative variability.

ib03 - Nov 28, 2025, 4:51 PM Argentina, UTC−3 / Updated Nov 28, 2025
