This commitment is reflected both in the draft governance of the proposed federation and in the active ethical monitoring within the project. The ethics monitoring group is working with the coordinator, partners and an external ethics consultant to establish a practical ethical framework that will cover future challenges. For example, the EUHubs4Data project is committed to the general principles of trustworthy AI and seeks to incorporate and develop best practices in ethics.
Moreover, as a Horizon 2020 project, it is contractually bound by funding rules, legal frameworks such as the General Data Protection Regulation, and national laws. It is therefore very important that funded experiments can provide all required documentation, especially when working with personal and anonymized data, but also when dealing with other ethical and legal issues.
To support SMEs in this process, we have compiled frequently asked questions raised during our active ethics monitoring of the project.
What are the main ethical challenges in data ethics?
There is no single main challenge in data ethics; rather, the main challenges fall into four areas: privacy, lack of transparency, consent, and power.
What process do I need to follow to comply with the ethical requirements for my experiment?
The process is common to all experiments, but the activities that take place in each phase depend on the potential ethical concerns of each experiment. Before an experiment can be declared eligible, a self-assessment must be completed, and the applicant must agree to deliver or provide the indicated documentation if the experiment is selected. Failure to meet the ethical requirements can delay the provision of funding.
Once I have provided the requested documentation, am I done with ethics?
Ethics concerns may arise at any moment during the experiment; thus, active ethics monitoring is required. If an ethical issue arises, it needs to be properly addressed before the experiment can proceed. The documentation will be updated with the findings for the interim reviews.
What is a DPO?
Data protection officers (DPOs) are data protection experts who are responsible for monitoring an organisation’s data protection compliance; informing and advising on its data protection obligations; providing advice on DPIAs (data protection impact assessments) and monitoring their performance; and acting as a contact point for data subjects and the relevant supervisory authority.
What is a DPIA (data protection impact assessment)?
A DPIA is a type of risk assessment. It helps you identify and minimise risks relating to personal data processing activities. DPIAs are also sometimes known as PIAs (privacy impact assessments). The EU GDPR (General Data Protection Regulation) and the DPA (Data Protection Act) 2018 require you to carry out a DPIA before certain types of processing. This ensures that you can mitigate data protection risks. Companies should also conduct one when introducing new data processing processes, systems or technologies. At the following link, applicants can find open-source PIA software that helps carry out a data protection impact assessment: https://www.cnil.fr/en/open-source-pia-software-helps-carry-out-data-protection-impact-assesment
What is a data controller?
A data controller is a person or organisation who controls the purpose of and means by which personal data is processed. So, if you (as an individual) collect and store personal data, you are a data controller. If your business collects and stores personal data, your business is a data controller. Data controllers are tasked with ensuring that personal information is being processed lawfully and in accordance with the GDPR’s data protection requirements.
Is a data controller different from a data protection officer?
The term “data controller” is used to broadly identify people and organisations who must comply with the GDPR, whereas a data protection officer (DPO) is the formal job title given to someone who has been specifically hired to carry out these obligations.
What are the responsibilities of Data controllers?
The responsibilities of data controllers are defined in Article 24 of the General Data Protection Regulation: https://gdpr-info.eu/art-24-gdpr/
Which laws and regulations might be applicable to our experiment?
It is important to consider which laws and regulations might be relevant, what these laws are designed to protect or accomplish, and what the impact may be of not taking them into account. This includes considering regulations such as GDPR (the European General Data Protection Regulation) and any local or regional regulation.
How are we achieving ethical accountability?
It should be clear who will be accountable for minimizing any harm that could be done by the experiment. Accountability includes ensuring that the experiment team proactively identifies potential stakeholders and evaluates harms, such as possible disproportionate effects that may arise from the application of a model.
How might the legal rights of an individual be impinged by our use of data?
For the experiment to be ethical, the organization must have the right to use the data for its specific purpose. For example, privacy considerations should not only focus on who owns the collected data, but also on the rights that apply to downstream users of that data.
How do we know that the data is ethically available for its intended use?
Being able to access and collect data does not mean that it is ethical to use that data. Hence, care must be taken to understand who owns the data, what their rights and expectations are, and whether the data is being used in the way that the person (or entity) that contributed it intended.
How do we know that the data is valid for its intended use?
The SME should ensure that the data used for the experiment is suitable for the intended use. One aspect of data validity is data accuracy. For example, imputing missing values or excluding records with missing values could have a significant impact on the downstream analytical results (and might amplify bias). Another data validity concern is ‘fitness for purpose’ with respect to how specific data will be used.
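As a purely hypothetical illustration of this point (the numbers and code below are not part of the project), mean-imputing missing values preserves the average but shrinks the spread of the data, so records look more homogeneous than the underlying population:

```python
# Hypothetical records with missing values (None); invented for illustration.
raw = [10.0, 12.0, None, None, 50.0]

# Option A: exclude records with missing values.
complete = [v for v in raw if v is not None]
mean_complete = sum(complete) / len(complete)
var_complete = sum((v - mean_complete) ** 2 for v in complete) / len(complete)

# Option B: impute missing values with the observed mean.
imputed = [v if v is not None else mean_complete for v in raw]
mean_imputed = sum(imputed) / len(imputed)
var_imputed = sum((v - mean_imputed) ** 2 for v in imputed) / len(imputed)

# The mean is unchanged, but the variance shrinks, which can silently
# distort any downstream analysis that depends on the data's spread.
```

Either choice (dropping or imputing) changes what the data represents, which is why the imputation strategy itself is a validity decision that should be documented.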
How have we identified and minimized any bias in the data or in the model?
Machine learning models can be built using data that contains a bias, and the model might then learn this bias (for example, machine learning algorithms have been shown to inherit racial and gender biases). In other words, bias in a model can come from bias in the data used to build it. Thus, the data science team needs to be aware that choices with respect to training data can have profound impacts on others.
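One simple way to surface such bias (the group labels and predictions below are hypothetical, and this is only a sketch, not a complete fairness audit) is to compare the model's positive-prediction rate across subgroups:

```python
from collections import defaultdict

def selection_rates(groups, predictions):
    """Positive-prediction rate per group (a demographic-parity style check)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for g, p in zip(groups, predictions):
        totals[g] += 1
        positives[g] += p
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical model output for two subgroups "A" and "B".
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
preds  = [1, 1, 1, 0, 1, 0, 0, 0]

rates = selection_rates(groups, preds)
# A large gap between the best- and worst-treated group is a signal
# that the training data or model deserves closer review.
gap = max(rates.values()) - min(rates.values())
```

A large gap is not proof of an ethical problem on its own, but it is exactly the kind of disproportionate effect the experiment team should investigate and document.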
How transparent does the model need to be and how is that transparency achieved?
Often, an explanation in understandable terms of why a specific decision is recommended cannot be supplied, even by the team that built the model. This makes explainability and comprehensibility very difficult, and many models are effectively a black box. Model transparency is particularly important when model output might disadvantage a certain subgroup (or appear to do so), or in situations where there is a high degree of regulation or a right of challenge (e.g., lending money).
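By contrast, an inherently transparent model lets the team report each feature's contribution to an individual decision. A minimal sketch, assuming a hypothetical linear lending score (all feature names, weights and applicant values are invented):

```python
# Hypothetical linear scoring model: weights and values are invented.
weights = {"income": 0.5, "debt": -0.5, "years_employed": 0.25}
applicant = {"income": 2.0, "debt": 1.0, "years_employed": 4.0}

# Per-feature contributions form a human-readable explanation of the score.
contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

# Sort contributions by magnitude to report the most influential factors first.
explanation = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
```

For a black-box model no such per-decision breakdown is directly available, which is why the required degree of transparency should be decided before the modelling approach is chosen.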
What are likely misinterpretations of the results and what can be done to prevent those misinterpretations?
Most predictive models are statistical in nature. They provide no guarantees; rather, they tell us about areas where an increased probability of an outcome might guide us to act differently. With this in mind, the data science project manager should ensure that the analytical decisions made as a result of a data science project reflect the scale, accuracy and precision of the data that was used in creating the model.
What is the ethics self-assessment that is required for participating in the Open Call?
Your experiment will be part of a Horizon 2020 project; therefore, its rules apply. Furthermore, ethics is an important pillar of the planned federation. Based on the Code of Ethics, the Ethics Self-Assessment is intended to assist you in thinking about your ethics-related documentation, leadership and actions. It should be kept and provided to the responsible EUHubs4Data contact for the experiment. It must not be used as a tool for evaluating the ethical behavior of others.
How should I proceed if I have any doubts when answering the self-assessment questions?
As a starting point, the ‘Ethics Issues Table’ is used to identify potential ethics issues. If in doubt, rather answer ‘yes’ and provide the requested documentation, as this will strengthen your proposal. In particular, do not hesitate to mention risks related to the data sets used (e.g., the risk of de-anonymization). If you decide to answer ‘no’ where this might not be clear to an external auditor/reviewer, you may also want to explain in the ethics self-assessment why the issue does not apply to your proposal.
Some of the ethics issues listed in the ethics self-assessment apply to my proposal. What do I need to do?
In the Ethics Issues Table of the proposal, select ‘yes’ for all issues that potentially apply to your proposal.
If your experiment is selected, you will need to address these issues prior to grant signature. Follow the detailed instructions in the user guide “How to complete your ethics self-assessment“.
The data I am using is said to be anonymized. Do I need to apply GDPR procedures?
Responsibility for the proper use of the data lies with the user. The user needs to ensure that the anonymization methods used are compatible with the future use of the data. A risk assessment or DPIA may be needed to evaluate the risks and the methods required.
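As an informal sketch of such a risk check (the records and quasi-identifier fields below are invented), one common signal of re-identification risk is a record whose quasi-identifier combination is unique or nearly unique in the data set, i.e. a small k in k-anonymity terms:

```python
from collections import Counter

def min_group_size(records, quasi_identifiers):
    """Smallest equivalence class over the quasi-identifiers (the k in k-anonymity)."""
    keys = [tuple(r[q] for q in quasi_identifiers) for r in records]
    return min(Counter(keys).values())

# Hypothetical "anonymized" records: direct identifiers removed, but
# combinations of zip prefix and age band may still single people out.
data = [
    {"zip": "750", "age_band": "30-39", "diagnosis": "x"},
    {"zip": "750", "age_band": "30-39", "diagnosis": "y"},
    {"zip": "751", "age_band": "40-49", "diagnosis": "z"},
]

k = min_group_size(data, ["zip", "age_band"])
# k == 1 means at least one record is unique on its quasi-identifiers,
# so the data may not be effectively anonymized for its intended use.
```

A check like this does not replace a DPIA, but it illustrates why "anonymized" is a property that must be verified against the actual data and its intended use.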
Who is responsible if any ethical issue arises?
Final responsibility for an ethical issue lies with the performer of the experiment. EUHubs4Data is willing to help prevent any ethical concern; you can refer to email@example.com for further information.
Are ethical issues solved by being GDPR compliant?
GDPR focuses on one very important element of data ethics: protecting people’s privacy. It also touches upon some other elements of data ethics, such as accountability and transparency, but experiment ethics has a wider scope than GDPR alone. Discrimination, profiling, ownership, transaction transparency, consent, currency, and openness can also raise ethical issues. For further information, see https://ec.europa.eu/info/funding-tenders/opportunities/docs/2021-2027/horizon/guidance/ethics-and-data-protection_he_en.pdf
Where can I find more information on ethics and data protection?
At the following link (https://edpb.europa.eu/about-edpb/about-edpb/members_en), you can find the European national data protection authorities. They usually provide guidelines on how to fulfill requirements. Also, in case your application works with AI models or algorithms, the AI assessment list can be found here (https://futurium.ec.europa.eu/en/european-ai-alliance/pages/altai-assessment-list-trustworthy-artificial-intelligence) so that proper considerations can be taken into account. Ethics guidelines for trustworthy AI can be found here (https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai)