AutoML, Neural Architecture Search and automated feature generation are provided on a modern HPC and GPU infrastructure. Given a dataset and a machine learning architecture, we provide optimized configurations based on a predefined hyperparameter search space (model size, kernel, activation functions, etc.).
SERVICE DESCRIPTION
Based on the latest KIT research results, we can use modern HPC infrastructure to tune hyperparameters efficiently. We can optimize hyperparameters of formalized extraction pipelines (e.g. window sizes, feature subsets, etc.) as well as tune machine learning model architectures (AutoML and Neural Architecture Search). We work with existing models built in scikit-learn, PyTorch, or TensorFlow/Keras.
Using data-efficient black-box optimization methods, we can optimize the performance of these models on the datasets you provide. Instead of just learning model weights, we automatically explore the search space of possible hyperparameters using, for example, semi-parallel Bayesian optimization methods on our HPC infrastructure. In doing so, we can also take constraints into account, so that you can likely expect the best-fitting model configuration for your regression, prediction, and/or classification task. We specialize in particular in machine learning optimization on high-frequency time-series data.
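As a minimal sketch of what exploring a hyperparameter search space looks like in code: the example below uses scikit-learn's built-in `RandomizedSearchCV` on a synthetic dataset purely as a stand-in for the Bayesian/HPC tooling described above, which is not specified here. The model, search space, and dataset are illustrative assumptions, not the actual service pipeline.

```python
# Illustrative hyperparameter search with scikit-learn (a stand-in for the
# Bayesian optimization methods mentioned above, which run on HPC hardware).
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

# Synthetic placeholder data; in practice this is your provided dataset.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# A predefined search space: kernel, regularization strength, kernel width.
search_space = {
    "kernel": ["rbf", "poly", "sigmoid"],
    "C": [0.01, 0.1, 1.0, 10.0],
    "gamma": ["scale", "auto"],
}

# Randomly sample 10 configurations and score each by 3-fold cross-validation.
search = RandomizedSearchCV(SVC(), search_space, n_iter=10, cv=3, random_state=0)
search.fit(X, y)

print(search.best_params_)   # the best configuration found
print(search.best_score_)    # its mean cross-validated accuracy
```

A Bayesian optimizer replaces the random sampling with a surrogate model that proposes promising configurations, which matters when each model fit is expensive.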
The task will be supervised by an experienced ML researcher from KIT. KIT is "The Research University in the Helmholtz Association". As one of the largest scientific institutions in Europe and the only German university of excellence with national large-scale research facilities, KIT combines a long university tradition with program-oriented cutting-edge research. Since KIT also focuses on innovation and technology transfer, our experts have many years of experience from applied industrial projects. The methods used are tried and tested.
SPECIAL ACCESS CONDITIONS
Conditions and requirements for participation in an experiment within the Open Calls:
By participating in an EUHubs4Data Open Call, you are initially only applying for funding that originally comes from the European Commission and is awarded by the coordinator exclusively in its own name through the conclusion of a sub-grant agreement. Neither your application nor a possible positive funding decision establishes a contract with KIT.
KIT will therefore - also in your own interest - conclude a separate written agreement with you at the start of the experiment (based on our sample cooperation agreement).
If you decide to propose the participation of KIT and SDIL infrastructure in your experiment, you must respect the following conditions.
We provide this information in advance to ensure maximum transparency: please contact us if you have any questions. In the unlikely event that you are unable to conduct your experiment with our participation, we will attempt to assist you in selecting alternative services before the experiment begins.
Please note that, contrary to the name "service", the above description is not a genuine commercial offer but a listing of exclusive contributions as part of a genuine collaboration between equal partners.
For genuine commercial offerings related to the above topics, please feel free to contact us at any time outside of the Open Calls.
PREREQUISITES
Existing ML model with training data (e.g. from a previous task within an experiment).
CASE EXAMPLES
If you can provide us with a working ML model, e.g. gradient boosting with decent accuracy (or any other target metric), we may be able to provide you with an even better model by using extensive computing power to automatically explore a given design space. You never know whether, say, an SVM with a periodic kernel might fit your data better. Similarly, we can use other tools to optimize your neural network architecture beyond learning its weights.
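The case above can be sketched as a small comparison: score the provided baseline model by cross-validation, then let an automated search try an alternative model family. The dataset, model choices, and search grid below are hypothetical placeholders for illustration only.

```python
# Sketch of the case example: a gradient-boosting baseline versus an SVM
# found by automated search. Data and grids are illustrative placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=10, random_state=42)

# Baseline: the "working ML model with decent accuracy" you would provide.
baseline = cross_val_score(
    GradientBoostingClassifier(random_state=42), X, y, cv=3
).mean()

# Automated exploration: perhaps a differently configured SVM fits better.
grid = GridSearchCV(SVC(), {"kernel": ["rbf", "poly"], "C": [0.1, 1, 10]}, cv=3)
grid.fit(X, y)

print(f"baseline: {baseline:.3f}, tuned SVM: {grid.best_score_:.3f}")
```

In the real service, the exhaustive grid would be replaced by a data-efficient optimizer and run across many HPC nodes, but the comparison logic is the same.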
SERVICE CAN BE COMBINED WITH
Model tuning can be used as an add-on to any ML model development service, as long as model learning can be replicated on KIT's infrastructure. Although infrastructure is typically included, any other PaaS or IaaS service that supports standard Python-based ML frameworks and can run dask as a parallelization layer can be used if needed.
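To illustrate what "dask as a parallelization layer" means in practice: scikit-learn parallelizes through joblib's pluggable backends, and dask registers itself as one such backend. The sketch below uses the local "threading" backend so it runs anywhere; on a cluster with dask.distributed installed and a client connected, the backend name "dask" would distribute the same fits across workers. All names besides the backend swap are illustrative assumptions.

```python
# Sketch: running a scikit-learn search through joblib's pluggable backend.
# With dask.distributed installed and a Client connected, passing "dask"
# instead of "threading" distributes the cross-validation fits to a cluster.
from joblib import parallel_backend
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

grid = GridSearchCV(
    LogisticRegression(max_iter=500),
    {"C": [0.1, 1.0, 10.0]},
    cv=3,
    n_jobs=-1,
)

with parallel_backend("threading"):  # swap for "dask" on a real cluster
    grid.fit(X, y)

print(grid.best_params_)
```

This is why the infrastructure requirement above is lightweight: any environment that can run Python ML frameworks and a dask scheduler can host the same search code.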