
The library of essays of Proakatemia

AI ethics

Written by: Marcos Homar Heinonen - from team SYNTRE.

Essay type: Individual essay / 2 essay points.
Estimated reading time: 4 minutes.

Ethics is a set of moral principles that helps us differentiate between right and wrong. AI ethics is a multidisciplinary field that investigates how to maximize AI's positive influence while minimizing risks and negative outcomes. These issues include data responsibility and privacy, fairness, explainability, robustness, transparency, environmental sustainability, inclusion, moral agency, value alignment, accountability, trust, and technology misuse.

Rules and protocols for managing the use of AI are constantly developing. The Belmont Report was developed by the academic community to guide ethics in experimental research and has since been used to guide algorithm development as well. It contains three principles serving as a guide for experiments and algorithm design, which are the following.

Respect for persons: This principle recognizes people's autonomy and expects researchers to protect those with diminished self-determination, which can result from a range of factors such as disease, mental limitation, or age. It is largely concerned with the concept of consent: individuals should be aware of the possible risks and benefits of any experiment they take part in and should be free to leave at any point before or during it.

Beneficence: This principle comes from medical ethics, where doctors pledge to do no harm. The concept applies readily to artificial intelligence: models can fuel discrimination based on race, political views, gender, and so on, despite the best intentions of those building the system.

Justice: This principle addresses fairness and equality. Who should gain from experimentation and machine learning? The Belmont Report suggests five approaches to distributing burdens and benefits. These five are equal share, individual need, individual effort, societal contribution, and merit.
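One way the beneficence and justice principles become measurable in practice is by checking whether a system's favourable decisions are distributed evenly across groups. The sketch below, a minimal illustration with hypothetical names (the Belmont Report itself prescribes no code or metric), computes per-group selection rates and the largest gap between them, a quantity often called the demographic parity gap:

```python
from collections import defaultdict

def selection_rates(decisions):
    """Fraction of favourable outcomes per group.

    `decisions` is a list of (group, outcome) pairs, where outcome
    is True for a favourable decision (e.g. a loan approved).
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        if outcome:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Toy data: group "A" is approved twice as often as group "B".
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
print(selection_rates(decisions))       # roughly {"A": 0.67, "B": 0.33}
print(demographic_parity_gap(decisions))
```

A large gap does not prove discrimination on its own, but it flags exactly the kind of unequal distribution of benefits that the justice principle asks us to examine.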


Foundation models

ChatGPT was a true inflection point in what AI can do. Its release in 2022 showed how AI could be applied across almost every industry. ChatGPT is built on foundation models: AI models that can be adapted to a wide range of downstream tasks. Foundation models are typically large-scale generative models with billions of parameters, trained with self-supervision on unlabeled data. This lets them transfer what they have learned from one task to another, making them extremely versatile and capable of performing a wide range of tasks. However, the tech sector is aware of a variety of potential problems and ethical difficulties with foundation models. These include bias, the creation of fake material, a lack of explainability, misuse, and social effects. Although many of these problems apply to AI in general, they become much more urgent given the power and accessibility of foundation models.
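The key trick behind self-supervision is that the training labels come from the data itself, so no human annotation is needed. A toy sketch of the idea (real foundation models do this over billions of tokens with neural networks, not with the simple string splitting shown here): take one unlabeled sentence and turn it into many (input, target) training pairs by hiding each word in turn, the way masked-language-model pretraining does.

```python
def make_masked_examples(sentence, mask_token="[MASK]"):
    """Turn one unlabeled sentence into (input, target) training pairs.

    Each word is hidden in turn and becomes the prediction target;
    the label is derived from the data itself, which is the essence
    of self-supervised learning.
    """
    words = sentence.split()
    examples = []
    for i, target in enumerate(words):
        masked = words[:i] + [mask_token] + words[i + 1:]
        examples.append((" ".join(masked), target))
    return examples

pairs = make_masked_examples("foundation models learn patterns")
print(pairs[0])  # ("[MASK] models learn patterns", "foundation")
```

Because every sentence on the internet yields training pairs for free this way, the approach scales to enormous corpora, which is what makes foundation models so broadly capable and, by the same token, so hard to audit.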

The impact of AI on jobs is often framed purely as job loss. This view should be rethought: as with every new technological advancement, market demand for specific work roles shifts. Artificial intelligence will move job demand to other areas, and many new positions will open up around managing AI systems. Because the amount of data grows and changes constantly, demand in these areas will keep rising. Resources will still be required to handle more complicated issues in the sectors most likely to see shifts in job demand, such as customer service. The key to artificial intelligence's impact on the labor market is that it will help people move into these new in-demand fields.


Data privacy

Since data privacy, security, and protection are so often discussed together, legislators have been able to advance this area in recent years. For instance, the General Data Protection Regulation (GDPR) was adopted in 2016 to give individuals greater control over their data and to safeguard the personal information of citizens in the European Union and the European Economic Area. Individual US states are creating their own laws, such as the California Consumer Privacy Act (CCPA), which requires companies to notify customers when their data is being collected. Companies are now required by law to reconsider how they handle and store personally identifiable information. Because of this, they are placing a higher premium on security as they attempt to eliminate vulnerabilities and opportunities for surveillance, hacking, and cyberattacks.
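One common technique companies use to reduce the exposure of personally identifiable information is pseudonymization: replacing a direct identifier with a token so records can still be linked without revealing the original value. A minimal sketch using Python's standard library (the field names and key here are illustrative, not from any specific regulation or system):

```python
import hashlib
import hmac

def pseudonymize(value, secret_key):
    """Replace a piece of PII with a keyed hash token.

    The same input always maps to the same token, so records can
    still be joined across datasets, but the original value cannot
    be recovered without the secret key. Note: under the GDPR,
    pseudonymized data is still personal data -- this reduces
    exposure, it does not anonymize.
    """
    return hmac.new(secret_key, value.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Hypothetical usage: store the token instead of the email address.
token = pseudonymize("alice@example.com", b"server-side-secret")
print(token)  # 64-character hex string, stable for the same input
```

Using a keyed hash (HMAC) rather than a plain hash matters: without the key, an attacker could simply hash a list of known email addresses and match the tokens.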

While there is not yet a single, comprehensive law governing AI, several nations and governments are working to create and enact regional regulations. A few AI regulations are already in operation, with many more to come. To close this gap, ethicists and researchers have worked together to develop ethical frameworks that regulate the creation and use of AI models in the community. Studies suggest, however, that dividing accountability between humans and machines, combined with a lack of foresight into potential effects, does little to minimize public harm.

In conclusion, the rapidly advancing field of artificial intelligence raises profound ethical considerations that demand careful reflection and responsible governance. As we navigate the uncharted territory of AI, it is vital to prioritize ethical principles that safeguard human well-being, privacy, and social values. Striking a delicate balance between technological innovation and ethical accountability is essential to ensure that AI serves the common good. As we move forward, a collaborative effort from policymakers, technologists, ethicists, and the rest of society is crucial to shape a future where AI aligns with our values, promoting a world that benefits all.









