Project Summary
A fund of €5M has been awarded to the SPATIAL project, coordinated by TBM Associate Professor Dr. Aaron Ding. SPATIAL focuses on two specific AI challenges: the training of AI models (prior to their use), which is vulnerable to bias, privacy leakage and model poisoning, and the black-box operation of algorithms once they are deployed. The project contributes to a trustworthy framework for AI-driven security by designing and developing resilient accountable metrics, privacy-preserving methods, verification tools and system solutions for more trustworthy AI. It paves the way towards trustworthy AI in security solutions by fostering appropriate skills and education for trustworthy AI in cybersecurity, covering both societal and technical aspects.
Project description
The SPATIAL consortium is directed by Dr. Aaron Ding (TU Delft, Netherlands) and comprises 12 partners across 8 European countries (Netherlands, Germany, Spain, Finland, France, Ireland, Serbia and Estonia). The consortium combines long-standing research expertise in AI, cybersecurity, IoT, edge computing, and lab-to-market know-how.
The SPATIAL project will tackle the identified gaps in data and black-box AI by designing and developing resilient accountable metrics, privacy-preserving methods, verification tools and system solutions that will serve as critical building blocks for trustworthy AI in ICT systems and cybersecurity.
Besides the technical ambition, SPATIAL will facilitate appropriate skill development and education for AI security to strike a balance among technological complexity, societal complexity, and value conflicts in AI deployment. The project covers data privacy, resilience engineering, and legal-ethical accountability, in line with Europe's priority to achieve trustworthy AI. The work carried out on both social and technical aspects will serve as a stepping stone to establish an appropriate governance and regulatory framework for AI-driven security in Europe.
Black-box AI refers to AI systems that receive input and produce output without the end user understanding how the output is derived. Because inputs and outputs cannot be easily inspected or interpreted, this opacity can create problems within and across organisations. The EU-funded SPATIAL project will address the challenges of black-box AI and data management in cybersecurity. To do this, it will design and develop resilient accountable metrics, privacy-preserving methods, verification tools and system frameworks to pave the way for trustworthy AI in security solutions. In addition, the project aims to help generate appropriate skills and education for trustworthy AI in cybersecurity, covering both societal and technical aspects.
What's Next?
Building on the findings of SPATIAL, much research will still be needed. The results of the project can be considered a stepping stone towards trustworthy, transparent and explainable AI for cybersecurity solutions. The results can also contribute to the governance and regulatory framework, setting an international standard that makes AI-driven cybersecurity more secure in the future.