Research Team Develops AI Data Leak Prevention Guidelines

Prof. David Blumenthal (photo: FAU/Georg Pöhlein)

A research team including Prof. David Blumenthal from AIBE has developed guidelines to prevent data leakage in AI. Such leakage occurs when information passes improperly between training and test data, for example when test samples influence preprocessing or model selection, leading to overly optimistic and unreliable results.
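A common form of the leakage described above can be sketched in a few lines. In this hypothetical example (not drawn from the team's guidelines), features are standardized using statistics computed on the full dataset before splitting, so the test split silently influences the training features; the fix is to compute the statistics on the training split only.

```python
import numpy as np

# Synthetic data standing in for a real feature column.
rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=100)
train, test = data[:80], data[80:]

# Leaky: scaling statistics include the test split, so information
# from the test data flows into the training features.
leaky_train = (train - data.mean()) / data.std()

# Correct: scaling statistics come from the training split only;
# the same statistics are then reused to transform the test split.
clean_train = (train - train.mean()) / train.std()
clean_test = (test - train.mean()) / train.std()

# The two versions of the training features differ whenever the test
# split shifts the overall mean or spread.
print(np.allclose(leaky_train, clean_train))
```

The same pattern applies to any preprocessing step fitted to data, such as imputation or feature selection: fit on the training split, then apply to the test split.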

Prof. Blumenthal emphasized that while popular machine-learning frameworks make workflows easier, they also make it easier to apply those workflows incorrectly. To help researchers avoid such pitfalls, the team formulated seven key questions to guide the construction of ML models and ensure robust, reproducible research.

Their findings will be published in Nature Methods on August 9, 2024.

More information

Prof. Dr. David B. Blumenthal
Biomedical Network Science (BIONETS)
david.b.blumenthal@fau.de