This is the talk page for discussing improvements to the Machine ethics article. This is not a forum for general discussion of the article's subject.
This article is rated Start-class on Wikipedia's content assessment scale. It is of interest to the following WikiProjects:
This article was the subject of a Wiki Education Foundation-supported course assignment between 27 August 2019 and 6 December 2019. Further details are available on the course page. Student editor(s): Goflores, Avnishna, Najimene. Peer reviewers: Hannibalrising94, Ahnmelis, BorizAlva.
Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT ( talk) 00:23, 18 January 2022 (UTC)
The article is in dire need of copyediting on the tone, style, and balance of POV. There is significant research in this field well beyond the cluster of sources that are being presented here. I plan to assist in development of this page based on solid, independent sources. As it stands, several sections, if not the entire article, need to be rewritten completely and balanced out. It appears to present two or three authors, who have a working relationship (thereby making it a non-independent set of sources as per WP:IS) and are presenting their views with WP:UNDUE weight. -- ☯Lightbound☯ talk 19:25, 31 March 2014 (UTC)
The following content was added today. To me this is WP:OR. Please discuss, thanks.
Some researchers have argued that an intelligence explosion could take place in the near future. [2] [3] [4] The resulting AI could have very high impact, [5] [6] but need not necessarily share our values. [7] It is therefore important to know how to avoid catastrophic impact from an AI that conflicts with our values before an intelligence explosion. Furthermore, a "friendly" artificial intelligence could also have an ethically positive impact. [8] [9]
It has been pointed out that artificial systems already make ethically critical decisions. [10]
Machine ethics and the formalization of moral intuitions have been argued to inspire ethical discussion in general. [8] [11]
Machine ethics is often criticized as not being useful right now. For example, Andrew Ng argued that accidentally building human-level "evil" AI is possible, but "just so far away that I don’t know how to productively work on that." [12]
In a review of Nick Bostrom's book Superintelligence, Ernest Davis, computer science professor at New York University, pointed out that if a superintelligence can understand the minds of humans, the value loading problem may become trivial, since one could simply tell the AI to do what humans want. [13]
I did not conduct original research for this edit. Could you maybe elaborate? (Rereading, I think the following sentence is the only one that is really problematic: "It is therefore important to know how to avoid catastrophic impact from an AI that conflicts with our values before an intelligence explosion," especially since it is not followed by a note.) - Hkfscp11 — Preceding undated comment added 15:33, 6 March 2015 (UTC)
The main change I plan to add is a section about "algorithmic fairness," which is a new research topic in machine learning. I went through all the related wiki articles and think Machine ethics is the most suitable one. All the new citations are recent academic papers and reliable fact reports on machine learning bias.
Here is the proposed content:
The rapid development of AI technologies makes algorithmic fairness an important topic in machine ethics. [1] Algorithms increasingly affect decisions in high-stakes tasks, such as credit card applications [2], hiring decisions [3], college admissions, and criminal sentencing [4]. In 2015, the Obama Administration's Big Data Working Group released several reports warning of "the potential of encoding discrimination in automated decisions" and calling for "equal opportunity by design" for applications such as credit scoring.
There are concerns that these technologies may introduce new biases or perpetuate existing prejudice and unfairness, whether intentionally or not [4]. Despite these concerns, both research and industry practice are lagging behind [5]. One common approach in current practice is to exclude all protected attributes, such as race, color, religion, gender, disability, or family status, from the machine learning features. However, this is problematic because of redundant encoding: the protected attributes can often be inferred from the remaining features. For example, the combination of zip code and income [6] may correlate with racial demographics.
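The redundant-encoding problem described above can be illustrated with a small sketch. The data below is entirely synthetic and the threshold rule is hypothetical; the point is only that when proxy features correlate strongly with a protected attribute, a model that never sees the attribute can still recover it ("fairness through unawareness" fails):

```python
import random

random.seed(0)

def sample():
    """Synthetic record: the protected attribute 'group' is never a feature,
    but zip code and income correlate strongly with it in this toy data."""
    group = random.randint(0, 1)
    zip_code = 10000 + group * 5000 + random.randint(0, 999)
    income = 30000 + group * 40000 + random.randint(0, 10000)
    return group, (zip_code, income)

data = [sample() for _ in range(1000)]

def predict_group(features):
    """A rule that uses only the proxy features, never 'group' itself."""
    zip_code, income = features
    return 1 if zip_code >= 14000 and income >= 60000 else 0

# The protected attribute is recovered from the proxies alone.
accuracy = sum(predict_group(f) == g for g, f in data) / len(data)
print(f"group recovered from proxies with accuracy {accuracy:.2f}")
```

In this deliberately extreme toy data the proxies separate the groups perfectly; real-world correlations are weaker, but the same leakage occurs in degree.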
A recent study of existing criminal justice systems further underscores the importance of algorithmic fairness [4]. At various points in the criminal justice system, including decisions about bail, sentencing, or parole, an officer of the court may use quantitative risk tools to assess a defendant's probability of recidivism based on their history and other attributes. ProPublica analyzed a commonly used statistical method for assigning risk scores in the criminal justice system, the COMPAS risk tool [7], and argued that it was biased against African-American defendants: African-American defendants were more likely to be incorrectly labeled as higher-risk than they actually were, while white defendants were more likely to be incorrectly labeled as lower-risk than they actually were [4].
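The disparity described above is usually quantified with group-wise error rates: the false positive rate (non-reoffenders labeled high risk) and the false negative rate (reoffenders labeled low risk). The sketch below uses made-up illustrative counts, not ProPublica's actual data, to show how the two metrics are computed and compared across groups:

```python
def rates(records):
    """records: list of (labeled_high_risk, reoffended) boolean pairs.
    Returns (false positive rate, false negative rate) for the group."""
    fp = sum(h and not r for h, r in records)        # labeled high, didn't reoffend
    negatives = sum(not r for _, r in records)       # all who didn't reoffend
    fn = sum((not h) and r for h, r in records)      # labeled low, did reoffend
    positives = sum(r for _, r in records)           # all who reoffended
    return fp / negatives, fn / positives

# Hypothetical counts for two groups (each: 100 non-reoffenders, 100 reoffenders).
group_a = [(True, False)] * 45 + [(False, False)] * 55 \
        + [(True, True)] * 72 + [(False, True)] * 28
group_b = [(True, False)] * 23 + [(False, False)] * 77 \
        + [(True, True)] * 52 + [(False, True)] * 48

fpr_a, fnr_a = rates(group_a)
fpr_b, fnr_b = rates(group_b)
print(f"group A: FPR={fpr_a:.2f}, FNR={fnr_a:.2f}")  # higher FPR: wrongly flagged
print(f"group B: FPR={fpr_b:.2f}, FNR={fnr_b:.2f}")  # higher FNR: wrongly cleared
```

Even when a tool's overall accuracy is similar across groups, these per-group error rates can diverge sharply, which is the pattern the ProPublica analysis reported.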
-- Shift-3 ( talk) 14:28, 28 April 2017 (UTC)
Hello fellow Wikipedians,
I have just modified 5 external links on Machine ethics. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FAQ for additional information. I made the following changes:
When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.
This message was posted before February 2018.
After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than
regular verification using the archive tool instructions below. Editors
have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the
RfC before doing mass systematic removals. This message is updated dynamically through the template {{
source check}}
(last update: 5 June 2024).
Cheers.— InternetArchiveBot ( Report bug) 12:51, 11 January 2018 (UTC)
I suggest clarifying somewhere the distinction between machine ethics and AI alignment. Is AI alignment a subset of machine ethics? Alenoach ( talk) 19:08, 25 February 2024 (UTC)