Even validated machine learning models are often vulnerable to adversarial attacks, can leak private data, and can produce biased predictions. With our machine learning validation and testing services, we measure the vulnerabilities in your machine learning models and propose remedies.
The safety and reliability of machine learning are gaining importance as ML moves into real-world use cases. In addition, with the upcoming regulations on AI, ML models will need to be tested and validated when used in a range of critical applications.
Based on our research and practical knowledge, our experts can help you discover the vulnerabilities in your ML models and suggest possible remedies to make your models more robust and trustworthy. The methods used range from data augmentation to adversarial attacks to fine-tuned evaluation pipelines that address aspects of bias and model overfitting.
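As an illustration of the kind of adversarial testing mentioned above, the sketch below applies the Fast Gradient Sign Method (FGSM) to a toy logistic-regression model. This is a minimal, self-contained example with illustrative weights and inputs chosen for the demonstration; it is not the specific tooling used in our validation pipelines.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Fast Gradient Sign Method for a logistic-regression model.

    The gradient of the binary cross-entropy loss with respect to the
    input x is (sigmoid(w.x + b) - y) * w; FGSM takes a step of size
    eps in the sign direction of that gradient.
    """
    grad_x = (sigmoid(w @ x + b) - y) * w
    return x + eps * np.sign(grad_x)

# Toy model and a correctly classified point (illustrative values).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])   # w.x + b = 1.5, predicted class 1
y = 1.0

x_adv = fgsm_perturb(x, y, w, b, eps=0.9)
print(sigmoid(w @ x + b) > 0.5)      # original prediction: True (class 1)
print(sigmoid(w @ x_adv + b) > 0.5)  # adversarial prediction: False (class 0)
```

A model that flips its prediction under such small, targeted perturbations is a candidate for hardening, e.g. via adversarial training or data augmentation.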
The validation design is performed by an experienced ML researcher from KIT, "The Research University in the Helmholtz Association." As one of the largest scientific institutions in Europe and Germany's only University of Excellence with national large-scale research facilities, KIT combines a long university tradition with program-oriented cutting-edge research.
Since KIT also focuses on innovation and technology transfer, our experts have many years of experience from applied industrial projects.
SPECIAL ACCESS CONDITIONS
Conditions and requirements for participation in an experiment within the Open Calls:
By participating in an EUHubs4Data Open Call, you are initially only applying for funding that originates from the European Commission and is awarded by the coordinator exclusively in its own name through the conclusion of a sub-grant agreement. Neither your application nor a possible positive funding decision establishes a contract with KIT through this sub-grant agreement.
KIT will therefore, also in your own interest, conclude a separate written agreement with you at the start of the experiment (based on our sample cooperation agreement). If you decide to propose the participation of KIT and the SDIL infrastructure in your experiment, you must respect the following conditions. We provide this information in advance to ensure maximum transparency; please contact us if you have any questions. In the unlikely event that you are unable to conduct your experiment with our participation, we will attempt to assist you in selecting alternative services before the experiment begins.
Please note that, contrary to the name "service", the above description is not a genuine commercial offer but a listing of exclusive contributions as part of a genuine collaboration on equal footing.
For genuine commercial offerings related to the above topics, please feel free to contact us at any time outside of the Open Calls.
REQUIREMENTS
Existing ML model with training data (e.g. from a previous task within an experiment)
In order to escape the PoC purgatory of your predictive maintenance and servicing application, you may need to ensure more than just accuracy: the "robustness" of your algorithms must also hold in cases that are not contained in your data set. We show you approaches to increase the trustworthiness and overall safety of your ML models.
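One simple way to look beyond plain accuracy is to measure how a model's performance degrades when its inputs are corrupted, e.g. by sensor noise it never saw during training. The sketch below does this for a hypothetical predictive-maintenance setup: the data, the fault threshold, and the "model" (a fixed threshold classifier) are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical predictive-maintenance data: a single sensor reading,
# where values near 2.0 indicate a fault and values near 0.0 are normal.
x = np.concatenate([rng.normal(0.0, 0.3, 500), rng.normal(2.0, 0.3, 500)])
y = np.concatenate([np.zeros(500), np.ones(500)])

def predict(inputs):
    """Toy 'model': flag a fault when the reading exceeds 1.0."""
    return (inputs > 1.0).astype(int)

def accuracy_under_noise(sigma):
    """Accuracy when inputs are corrupted with Gaussian noise of scale sigma."""
    x_noisy = x + rng.normal(0.0, sigma, x.shape)
    return np.mean(predict(x_noisy) == y)

for sigma in (0.0, 0.5, 1.0):
    print(f"noise sigma={sigma}: accuracy={accuracy_under_noise(sigma):.2f}")
```

Plotting such a degradation curve for increasing noise levels gives a first, coarse robustness indicator that complements the clean-test-set accuracy.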