
Research Director at CNRS, Toulouse, France

atsec US CEO, Austin, TX, USA
Written by Yi Mao
Shanxi University in Taiyuan, China, recently hosted two landmark academic events: CLAR 2025 (the 6th International Conference on Logic and Argumentation) and BLESS 2025 (the 2nd International Conference on Bridges Between Logic, Ethics, and Social Sciences). Though distinct in focus, these twin conferences converged on a shared mission: harnessing formal logic to tackle pressing challenges in AI, language models, ethics, and societal systems. Together, they showcased interdisciplinary research bridging computer science, philosophy, and the social sciences, marking a pivotal moment in the dialogue between technical innovation and real-world impact.
The success of these conferences was made possible by the visionary leadership of Shanxi University’s School of Philosophy and their exceptional logic team, whose meticulous planning and warm hospitality ensured a welcoming environment for all invited speakers. We extend our deepest gratitude to Professors Yang You, Beihai Zhou, Chao Xu, Kai Li, and Yiyan Wang for their tireless dedication and outstanding contributions, which were instrumental in bringing these events to life.
CLAR 2025 was expertly co-chaired by Thomas Ågotnes (University of Bergen, Norway / Shanxi University, China) and Dragan Doder (Utrecht University, The Netherlands), while BLESS 2025 was led by Piotr Kulicki (John Paul II Catholic University of Lublin, Poland) and Tomasz Jarmużek (Nicolaus Copernicus University in Toruń, Poland). Their program committees curated an exceptional selection of presentations, fostering rich intellectual exchange.
The conferences brought together world-renowned scholars and emerging researchers, sparking dynamic discussions that highlighted the transformative power of logic in addressing contemporary challenges.
In his keynote at CLAR 2025, “Investigating Reasoning in Language Models,” Nicholas Asher critically examined whether large language models (LLMs) like GPT-4 and Llama 3 truly reason or merely simulate reasoning through statistical pattern-matching. His work proposed a groundbreaking framework to embed explicit logical inference layers within LLMs, enabling them to handle negation, transitivity, and other core aspects of human-like reasoning. By shifting the probability distribution from token sequences to sets of token sequences, Asher’s approach could fundamentally enhance LLMs’ deductive capabilities.
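To make that last idea concrete, here is a toy sketch of ours (hypothetical, not Asher's actual formulation): if probability mass is pooled over *sets* of token sequences rather than individual sequences, semantically equivalent surface forms, including negated paraphrases, count as a single outcome.

```python
# Toy sketch (hypothetical, not Asher's actual formulation): moving from a
# distribution over token sequences to one over *sets* of sequences, so that
# semantically equivalent surface forms pool their probability mass.

from collections import defaultdict

# A toy LLM-style distribution over output token sequences.
p_sequence = {
    ("the", "door", "is", "open"): 0.30,
    ("the", "door", "is", "not", "closed"): 0.15,  # paraphrase via negation
    ("the", "door", "is", "closed"): 0.35,
    ("the", "door", "is", "not", "open"): 0.20,    # paraphrase via negation
}

# A (hypothetical) semantic map from each sequence to its meaning class;
# handling negation means "not closed" and "open" land in the same class.
meaning_of = {
    ("the", "door", "is", "open"): "OPEN",
    ("the", "door", "is", "not", "closed"): "OPEN",
    ("the", "door", "is", "closed"): "CLOSED",
    ("the", "door", "is", "not", "open"): "CLOSED",
}

def p_set(p_seq, meaning):
    """Aggregate sequence probabilities into probabilities over meaning classes."""
    p = defaultdict(float)
    for seq, prob in p_seq.items():
        p[meaning[seq]] += prob
    return dict(p)

print(p_set(p_sequence, meaning_of))
```

On this toy distribution, "OPEN" accumulates 0.45 and "CLOSED" 0.55, so a deductive step (e.g., resolving a negation) can operate on meanings rather than on competing surface strings.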
Asher’s 2011 book, Lexical Meaning in Context: A Web of Words, laid conceptual foundations for the Transformer architecture (introduced in Google’s 2017 “Attention Is All You Need”). His Segmented Discourse Representation Theory (SDRT), which models how meaning dynamically emerges from discourse structure, anticipated the self-attention mechanisms that underpin modern LLMs. While not a direct technical blueprint, Asher’s work bridged formal semantics and computational linguistics, inspiring the shift toward context-aware AI. His CLAR 2025 keynote extended this legacy, offering a roadmap to imbue LLMs with robust logical reasoning.

At BLESS 2025, Dr. Yi Mao, CEO of atsec US, delivered a compelling talk titled "The Power of Logic: Verifying Systems, Securing AI." As a longtime advocate for the cost-effective application of formal methods in Common Criteria evaluations and FIPS validations, she demonstrated how logic-based verification, including tools like the Boyer-Moore theorem prover and Allen Emerson's model checking, strengthens high-assurance security in critical systems.
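To convey the flavor of model checking (a minimal toy sketch of ours, not the industrial provers and checkers cited in the talk), the idea is to exhaustively explore a system's state space and confirm that a safety property holds in every reachable state:

```python
# Minimal toy sketch of explicit-state model checking (ours, not the
# industrial tools cited in the talk): exhaustively explore a system's
# reachable states and verify a safety property in each one.

from collections import deque

# Toy protocol: two processes, each either idle ("i") or in its critical
# section ("c"). A process may enter only when both are idle.
def successors(state):
    a, b = state
    succs = []
    if a == "i" and b == "i":
        succs += [("c", "i"), ("i", "c")]  # either process may enter
    if a == "c":
        succs.append(("i", b))             # process A leaves
    if b == "c":
        succs.append((a, "i"))             # process B leaves
    return succs

def check_safety(initial, prop):
    """BFS over reachable states; return a violating state, or None if safe."""
    seen, queue = {initial}, deque([initial])
    while queue:
        s = queue.popleft()
        if not prop(s):
            return s  # counterexample found
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return None

mutual_exclusion = lambda s: s != ("c", "c")
print(check_safety(("i", "i"), mutual_exclusion))  # None: property holds
```

Real model checkers handle vastly larger (even infinite) state spaces with symbolic techniques, but the core guarantee is the same: the property is proved for every behavior, not merely tested on some.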
In her talk, Dr. Mao dissected threats to AI systems, contrasting vulnerabilities in predictive AI (e.g., adversarial evasion attacks) and generative AI (e.g., prompt injection, deepfake propagation). She warned that AI abuse, such as automated malware or disinformation, blurs the line between security (malicious threats) and safety (unintended harm), with dire consequences for sectors like healthcare and autonomous vehicles. Dr. Mao emphasized the interplay between AI and Information Assurance (IA) and proposed IA frameworks to secure AI through secure development standards, cryptography, and ethical governance (transparency, bias mitigation).
Dr. Mao’s presentation built industry-academia synergy by sharing real-world case studies (e.g., CC and FIPS certifications) to ground theoretical discussions in practical security challenges. She highlighted NIST’s Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations as a tool for policymakers, and called for interdisciplinary collaboration to address AI’s societal risks, such as disinformation and safety hazards. Her talk resonated with the conference’s mission to connect technical rigor with ethical and social responsibility. By framing AI security as a logic-ethics imperative, she provided actionable insights for researchers, policymakers, and industry leaders navigating AI’s societal impact.
atsec has a longstanding tradition of fostering academic excellence, particularly in interdisciplinary fields spanning logic, philosophy, linguistics, computer science, and AI. Demonstrating this commitment, in 2021, atsec established the Dr. Nicholas Asher Endowed Fellowship at the University of Texas at Austin, alongside scholarships at Peking University and Concordia University.
The recent twin conferences in Taiyuan further expanded atsec’s academic collaborations to include leading European institutions. We warmly invite talented logicians and researchers to explore the dynamic intersection of logic and information security with us.
