ETHICS POLICY

PREAMBLE
Our multidisciplinary ethics committee, comprising a jurist, ethicists, physicians, a machine learning developer and a neuroscience researcher, has formulated the Objectives and the Values of Aifred Health in the form of this Code of Ethics.

Aifred Health is proud to have signed and endorsed the principles of the Montréal Declaration for Responsible AI.

Our objectives are based on values that we will not compromise as we endeavour to build an excellent product and to serve our users: patients with mental illness and the clinicians working to help them.

Each of the following values is of equal importance.

    1. HUMANISM

      Using AI for social good.

      1. To use tools and methods from machine learning and data science to provide better insights, reduce wasted time, and increase the speed, efficiency, accuracy, and productivity of care for individuals with mental health concerns and their families.

      2. To develop a responsible AI application that addresses the lack of personalized mental health treatment guidance.

      3. To empower patients and clinicians in their shared decision-making efforts regarding patients’ mental health care.

      4. To work for the good of individuals and of our society.

    2. TRANSPARENCY

      Be transparent by enforcing a process of Meticulous Transparency in our research and product development, in order to provide:

      1. A Predictable Application: We will use standard model metrics that are familiar to clinicians and commit to extensive clinical testing of our products in order to produce an application whose performance and behaviour are reliable.

      2. An Interpretable Tool: We will use the interpretability tools we have created in our custom-built (and open-source) Vulcan machine learning framework to help clinicians and patients understand the outputs of our AI models in a meaningful way.

      3. Open-source access to our analysis frameworks.

    3. PRUDENCE AND REPRESENTATIVENESS

      Avoid creating unfair biases - machine learning tools can learn the biases of their creators and of the datasets they are built with, and as such we commit to:

      1. Use the highest-quality data available that is as representative as possible of the population we hope to help treat as we build our models.

      2. Anticipate, identify and share the biases that cannot be avoided.

      3. Train the algorithm using the latest technological tools to minimize bias, and explore and visualize our data in order to better understand what it can tell us, and what it can’t, about different kinds of patients.

      4. Create, over the long term, our own database that aligns with the latest data privacy standards and contains data that we will collect directly from the patients we are hoping to help, so that our tool can achieve the highest possible levels of clinical validity.

    4. PRIVACY

      Respect the dignity and rights of our users - patients, their families and health professionals - with:

      1. Integrity: Our primary goal is to improve the quality of patients’ mental health care. We work hard to respect users’ dignity and right to confidentiality from the conception of our product through to its commercialisation.

      2. Usefulness: We only collect data that is useful and necessary for training and evaluating our algorithm and for enabling better clinical use of our product.

      3. Freedom of Consent: Our terms and conditions were written by a privacy expert to respect recent changes to regulations (e.g., the GDPR). We have written an intelligible and easily accessible consent form. We make it as easy for our users to withdraw their consent as it is to offer it. All data used for model training is anonymized.

      4. Security: We take precautions to protect all our users’ information both online and offline, and to that effect we have established three data security policies (see below).

      5. Responsiveness: We have appointed a Data Protection Officer who is an expert in privacy law and is available to address any data issues or concerns in a timely manner.

    5. SECURITY

      Maintain a high level of security:

      1. To be trusted by our users.

      2. To enhance privacy and respect the latest data privacy regulations.

      3. To meet health industry standards.

      4. To respect, in our everyday operations, the three data security policies that we created in order to maintain this high level of security:

        1. Data Security Policy: Encryption to protect sensitive information transmitted online.

        2. Data Security Policy: Employee requirements to protect users’ information offline.

        3. Data Security Policy: Data leakage and loss prevention, including data anonymization and a breach policy.

    6. INTERDISCIPLINARY COLLABORATION AND SCIENTIFIC RIGOUR

      Involve diverse actors and perspectives in the design and development of our technology:

      1. A commitment to interdisciplinary work, providing a space for researchers, engineers, clinicians, jurists and ethicists to collaborate on the development of better mental health tools.

      2. A commitment to engaging end-users - clinicians, patients, and families - in our development process.

      3. To design and implement our product by integrating technical and ethical parameters.

      Encourage research in ethical, legal and social dimensions relevant to the development of our product:

      1. To anticipate and respond to new ethical, legal and social issues raised by advances in Artificial Intelligence.

      2. To provide our users with products that are safe and of high quality.

Date of last update: February 20, 2019.