Artificial intelligence (AI) and machine learning (ML) have advanced at an exponential pace over the past few years.
As the capacity to gather vast amounts of data has increased, the curation and development of highly targeted algorithms have accelerated alongside it. Data is the lifeblood of AI and ML, and as the ability to collect and interpret data continues to evolve, the ethical implications and concerns grow with it.
Biases and other grey areas can skew data and steer algorithms toward unethical results.
What are the ethical concerns and implications, and why must we pay attention?
Data, Algorithms And Results
The most prevalent ethical concerns in AI and ML.
These are the three major areas where deception or skewing of results can manifest.
Data: A solid ML system is built on abundant amounts of data. But what is the source of this data? If the data needed is not yet readily available, where will it be mined from? If it exists, is it freely available and open, and if it is not, is it ethical to buy it (assuming it can be bought)? If the data is gated but could be used for the benefit of millions, should those gating it really have the right to own it privately?
Another point for contemplation is the anonymous tracking of individuals without consent for the purpose of data collection. While this type of data gathering can identify suspicious and unusual behaviour, does this justify the invasion of private citizens’ lives?
Algorithm: Algorithms are typically open source and free to use. The ethical conundrum arises, however, when an algorithm is developed and owned by a private entity. If that entity has patented the algorithm, it is no longer legal to use it without permission. But what if the algorithm could be useful to a multitude of people? While this is primarily an intellectual property issue, it is also a concern for ML.
Results: This concern is specific to ML itself, since a model's predictions must be compared against actual answers to report its accuracy. The ethical issue is that, as errors occur during model development, they are often reported against a single data set. To achieve factual accuracy, errors should be reported across all, or at least multiple, data sets used by the model. If a model is tested on one specific dataset and re-tested on that same dataset over and over again, it will effectively memorise the set and yield better and better results, because it has learned to correct the specific errors in that data rather than to generalise.
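This memorisation effect can be illustrated with a short sketch. The snippet below is plain Python with a hypothetical toy dataset and a deliberately memorising 1-nearest-neighbour classifier; the names and data are invented for illustration. It shows why an error rate measured on the same data the model has already seen overstates real accuracy:

```python
import random

random.seed(0)

# Hypothetical toy data: x in [0, 1], true label is (x > 0.5),
# with 20% label noise to mimic imperfect real-world data.
def make_data(n):
    points = []
    for _ in range(n):
        x = random.random()
        label = x > 0.5
        if random.random() < 0.2:  # flip the label 20% of the time
            label = not label
        points.append((x, label))
    return points

def nn_predict(train, x):
    # 1-nearest-neighbour: pure memorisation of the training set.
    nearest = min(train, key=lambda point: abs(point[0] - x))
    return nearest[1]

def accuracy(train, eval_data):
    correct = sum(nn_predict(train, x) == y for x, y in eval_data)
    return correct / len(eval_data)

train = make_data(200)
test = make_data(200)  # a genuinely separate, held-out dataset

print(f"accuracy on its own training data: {accuracy(train, train):.2f}")  # 1.00
print(f"accuracy on unseen data:           {accuracy(train, test):.2f}")
```

Because the model has memorised every training point, re-testing on the training data reports perfect accuracy; only the held-out dataset reveals the true error rate.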
Transparency And Safety
Once an ML system is functioning adequately, will we truly be able to understand how it works and systematically gather data through it? Ethical analysis depends on first gathering the facts; only then can we begin to evaluate results.
For some ML techniques, such as deep learning in neural networks, there is often no practical way to analyse or explain the model's behaviour: either no rational explanation can be derived, or the explanation is too complex for humans to understand. The more powerful a tool or system is, the more transparent it should be. This is an ethical concern because the fact that AI and ML can be intrinsically opaque, beyond our understanding, keeps us in the dark and puts us at risk.
Questions you need to ask.
Another ethical issue with ML and AI is technical safety. Will AI and ML systems work as intended? What happens if they fail? What happens if we become dependent on them and they fail? These questions should be first and foremost when developing any ML model.
Bias In Data, Training Sets, Etc
Risks and potential damage.
Neural networks are the current driving force of AI. They effectively combine computer programs with their data sets. While this is beneficial in many ways, there is a risk that these data sets will produce biased results that could be detrimental to us all.
Imagine the potential damage of algorithmic bias in, say, financial institutions or government. There are already recorded cases in which data sets have caused algorithmic bias, which we discuss further below.
Good Vs. Harm
Pros and cons.
Like every other technology, the main purpose of AI is to help people lead longer and better lives. Insofar as AI helps people in these ways, we can be glad of it and appreciate the benefits it gives us.
Yet a perfectly well-functioning technology, such as a nuclear weapon, can cause immense harm when put to its intended use. There is no doubt that AI, like human intelligence, will also be used maliciously.
The good and the bad.
ML development requires colossal amounts of energy to train and sustain models, with costs reaching tens of millions of dollars or more. If this energy is sourced from fossil fuels, the environmental impact will be severe, at a time when climate change and other harmful environmental effects are already at alarming levels.
The good news is that ML and AI can also be used to reverse or prevent further harmful environmental effects through technology that is focused on energy efficiency.
New Technologies Are Always Created For The Sake Of Something Good
AI and ML offer us amazing new abilities to help people and make the world a better place.
But in order to make the world a better place, we need to choose to do that, in accord with ethics.
Through the concerted effort of many individuals and organisations, we can hope that AI technology will help us to make a better world.
Want to learn more about how you can use your data to improve business decisions and discover new opportunities? Talk to our data experts today. Contact us.