This is an archive of past discussions. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
Archive 1 | Archive 2 |
The category Structured Data Mining is missing. See summarization. The sub-categories in particular are also missing:
Two important books are:
"At a general level, there are two types of learning: inductive, and deductive."
What's deductive learning? Isn't learning inductive? -- Took 01:48, 10 April 2006 (UTC)
From a purely writing view, the rest of the paragraph (after the above quote) goes on to explain what inductive machine learning is, but deductive machine learning isn't covered at all. -- Ferris37 03:49, 9 July 2006 (UTC)
Should this article link to the "radial basis function" article, instead of linking to the two articles "radial" and "basis function"?
It's a minor point, but I see that in this article the format of the references is inconsistent. Bishop is cited once as Christopher M. Bishop and another time as Bishop, C.M. Is there a standard format for Wikipedia references? Jose
Some people, mainly researchers in this field (ML), are blogging about this subject. Some blogs are really interesting. Is there a space in an encyclopedia for links to those blogs? I can see three problems with this:
What do you think of adding a blog links section ? Dangauthier 14:11, 13 March 2006 (UTC)
I deleted the link to a supposed ML blog [1] which wasn't relevant and was not in English.
suggestion = archive bin required Sanjiv swarup ( talk) 07:44, 17 September 2008 (UTC)
Is there any reason that the See Also section is formatted in columns? Or was that just the result of some vestigial code... WDavis1911 ( talk) 20:38, 27 July 2009 (UTC)
On this page, and the main unsupervised learning page, the phrase "labeled examples" is not explained or defined before being used. Can somebody come up with a concise definition? -- Bcjordan ( talk) 16:31, 15 September 2009 (UTC)
Hi,
In the following context: As a broad subfield of artificial intelligence, machine learning is concerned with the design and development of algorithms and techniques that allow computers to "learn", no definition of the last word in the sentence - "learn" - is given. However, it seems essential, because it's central to this main definition.
A definition like "machine learning is an algorithm that allows machines to learn" sounds to me like a perfectly tautologous definition.
It's my understanding that this article is about either computer science, or mathematics, or statistics, or some other "exact" discipline. All of these disciplines have quite exact definitions of everything, except for those very few undefined terms that are declared upfront as axioms or undefined concepts. Examples: point, set, "Axiom of choice".
In this article, the purpose of Machine Learning and the tools it uses are clear to me as a reader. But the very method is obscure - what exactly it means for a machine to 'learn'. Would somebody please define "learn" in precise terms, without resorting to other words like 'understand' or 'intelligence' that are obscure and not exactly defined in the technical world?
There must exist a formal definition of 'learn', but if not, then, in my opinion, in order to avoid confusion, it should be clearly stated upfront that the very subject of machine learning is not clearly defined.
Compare this, for example, to how 'mathematics' is defined, or how the functions of ASIMO robot are clearly defined in Wikipedia.
Thanks in advance, Raokramer 13:28, 8 October 2007 (UTC)
There are formal definitions of what "learn" means. Basically it is about generalizing from a finite set of training examples, to allow the learning agent to do something (e.g. make a prediction, a classification, predict a probability, find a good representation) well (according to some mathematically defined criterion, such as prediction error) on new examples (that have something in common with the training examples, e.g., typically they are assumed to come from the same underlying distribution).
Yoshua Bengio March 26th, 2011. —Preceding undated comment added 01:18, 26 March 2011 (UTC).
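Bengio's description above — generalizing from a finite set of training examples so as to predict well on new examples from the same distribution — can be illustrated with a minimal, self-contained sketch. This is a hypothetical toy learner (a threshold classifier on synthetic data), not any specific published algorithm; the 0.6 cutoff and the data are invented for illustration:

```python
import random

random.seed(0)

# Toy distribution: x ~ Uniform(0, 1); true label = 1 iff x > 0.6
# (the 0.6 cutoff is unknown to the learner).
def draw(n):
    return [(x, int(x > 0.6)) for x in (random.random() for _ in range(n))]

train, test = draw(200), draw(200)

# "Learning" = picking, from a hypothesis space of candidate thresholds,
# the one that minimizes error on the training examples.
def error(t, data):
    return sum(int(x > t) != y for x, y in data) / len(data)

t_best = min((i / 100 for i in range(101)), key=lambda t: error(t, train))

# Generalization: the learned threshold also predicts well on unseen
# examples drawn from the same underlying distribution.
print(t_best, error(t_best, test))
```

The mathematically defined criterion here is simply the misclassification rate, matching the "prediction error" Bengio mentions.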
Does anyone think snipping the FR section (and moving it here) would encourage people to actually write something? -- Adoniscik( t, c) 02:40, 13 October 2008 (UTC)
In this diff, the "Bibliography" section was converted to "Further reading". Looking at the history, it's clearly an aggregation of actual sources with other things just added for the heck of it. It is sometimes possible to see what an editor was adding when he added a source there, so there are good clues for how we could go about citing sources for the contents of the article. It's too bad it developed so far so early, before there was much of an ethic of actually citing sources, because now it will be a real pain to fix. Anyone up for working on it? Dicklyon ( talk) 18:56, 10 April 2011 (UTC)
This article should definitely link to pattern recognition. And I feel there should be some discussion on what belongs on pattern recognition and what on machine learning. T3kcit ( talk) 06:21, 23 August 2011 (UTC)
Do all learning algorithms perform search? All rule/decision-tree algorithms certainly do search. Are there any exceptions?
Are there any other exceptions? Pgr94 ( talk) 12:31, 16 April 2008 (UTC)
One capability central to many kinds of learning is the ability to generalize [...] The purpose of this paper is to compare various approaches to generalization in terms of a single framework. Toward this end, generalization is cast as a search problem, and alternative methods for generalization are characterized in terms of search strategies that they employ. [...] Conclusion: The problem of generalization may be viewed as a search problem involving a large hypothesis space of generalizations. [...] Generalization as search, Tom Mitchell, Artificial Intelligence (1982) doi: 10.1016/0004-3702(82)90040-6
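Mitchell's "generalization as search" framing is embodied concretely in his Find-S algorithm, which searches a hypothesis space of attribute conjunctions from most specific to more general. A minimal sketch, with toy attribute values invented purely for illustration:

```python
# Find-S: search the hypothesis space of attribute conjunctions for the
# most specific hypothesis consistent with the positive training examples.
# "?" means "any value matches" for that attribute.
examples = [
    (("sunny", "warm", "normal"), True),
    (("sunny", "warm", "high"),   True),
    (("rainy", "cold", "high"),   False),
]

h = None  # most specific hypothesis: matches nothing yet
for attrs, positive in examples:
    if not positive:
        continue  # Find-S ignores negative examples
    if h is None:
        h = list(attrs)  # first positive example, taken verbatim
    else:
        # generalize just enough to cover this positive example
        h = [hi if hi == a else "?" for hi, a in zip(h, attrs)]

print(h)  # the search has generalized the third attribute away
```

Each update is a search step: the hypothesis moves through the space only as far as the data force it to, which is exactly the "generalization as search" view in the quoted paper.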
Is "representation learning" sufficiently notable to warrant a subsection? The machine learning journal and journal of machine learning research have no articles with "representation learning" in the title. Does anyone have any machine learning textbooks with a chapter on the topic (none of mine do)? There is no wikipedia article on the subject. Any objections to deleting? pgr94 ( talk) 22:38, 15 August 2011 (UTC)
Recently I've heard the term Adversarial Machine Learning a few times but I can't find anything about it on Wikipedia. Is this a real field which should be covered in this article, or even get its own article? — Hippietrail ( talk) 07:47, 29 July 2012 (UTC)
The lead section of the article is badly written and very confusing.
"Machine learning, a branch of artificial intelligence, is a scientific discipline concerned with the design and development of algorithms that take as input empirical data, such as that from sensors or databases, and yield patterns or predictions thought to be features of the underlying mechanism that generated the data. A learner can take advantage of examples (data) to capture characteristics of interest of their unknown underlying probability distribution. Data can be seen as instances of the possible relations between observed variables. A major focus of machine learning research is the design of algorithms that recognize complex patterns and make intelligent decisions based on input data. One fundamental difficulty is that the set of all possible behaviors given all possible inputs is too large to be included in the set of observed examples (training data). Hence the learner must generalize from the given examples in order to produce a useful output in new cases."
For example, the word "learner" is introduced without any context. For another example, the opening sentence is very long and meandering. Finally, the last sentence is very poorly explained and seems to be a detail which does not belong in a lead section. A lot of words are tacked on. This lead certainly does not summarize the article. Thus I am tagging this article. JoshuSasori ( talk) 06:09, 28 September 2012 (UTC)
I recently read an article about distance metric learning (jmlr.csail.mit.edu/papers/volume13/ying12a/ying12a.pdf) and it appears that there should be a section dedicated to preprocessing techniques. Distance metric learning has to do with learning a Mahalanobis distance which describes whether samples are similar or not. One could proceed to transform the data into a space where irrelevant variation is minimized and the variation that is correlated to the learning task is preserved (relevant component analysis). I think feature selection/extraction should also be mentioned.
I believe a brief section discussing preprocessing and linking to the relevant sections would be beneficial. However, such a change should have the support of the community. Please comment and provide your opinions. — Preceding unsigned comment added by 150.135.222.151 ( talk) 22:36, 28 September 2012 (UTC)
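For readers unfamiliar with the Mahalanobis distance mentioned above, here is a minimal sketch. The matrix M is fixed by hand purely for illustration; actual metric learning (as in the cited paper) would fit M from labeled similarity constraints:

```python
import math

# Mahalanobis distance d_M(x, y) = sqrt((x - y)^T M (x - y)) for a 2x2
# positive semi-definite matrix M. Here M is hand-picked to downweight
# the second dimension, as if metric learning had found it irrelevant.
M = [[1.0, 0.00],
     [0.0, 0.01]]

def mahalanobis(x, y):
    d = [x[0] - y[0], x[1] - y[1]]
    Md = [M[0][0] * d[0] + M[0][1] * d[1],
          M[1][0] * d[0] + M[1][1] * d[1]]
    return math.sqrt(d[0] * Md[0] + d[1] * Md[1])

# Two samples far apart only in the downweighted dimension end up closer
# than samples moderately apart in the relevant dimension.
print(mahalanobis((0, 0), (0, 10)))  # irrelevant axis: distance 1.0
print(mahalanobis((0, 0), (2, 0)))   # relevant axis:   distance 2.0
```

This is the sense in which a learned M "describes whether samples are similar or not": it reshapes the space so that task-irrelevant variation contributes little to the distance.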
"Algorithm Types" should probably not link to Taxonomy. It is simpler and more precise to say "machine learning algorithms can be categorized by different qualities." StatueOfMike ( talk) 18:18, 26 February 2013 (UTC)
I find the "Algorithm Types" section very helpful for providing context for the rest of the article. I propose adding a section/subsection "Problem Types" to provide a more complete context. For example, many portions of the rest of the article will say something like "is a supervised learning method used for classification and regression". "Supervised Learning" is explained somewhat under the "Algorithm Types" section, but the problem types are not. Structured learning already has a good breakdown of problem types in machine learning. We could incorporate that here, and hopefully expand on it. StatueOfMike ( talk) 23:12, 8 February 2013 (UTC)
I find the machine learning page pretty good. However, the distinction between machine learning and data mining presented in this article is misleading and probably not right. The terms 'data mining' and 'machine learning' are used interchangeably by the masters of the field along with plenty of us regular practitioners. The distinction presented in this article--that one deals with knowns and the other with unknowns--just isn't right. I'm not sure how to be positive about it. Data mining and machine learning engage in dealing with both knowns and unknowns because they're both really the same thing.
My primary source for there being no difference between the terms is the author of the definitive and most highly cited machine learning/data mining text, "Machine Learning" ( Mitchell, Tom M. Burr Ridge, IL: McGraw Hill, 1997), Carnegie Mellon Machine Learning Department chief, Tom Mitchell. Mitchell actually tackles head-on the lack of real distinction between the terms in a paper he published for Communications of the ACM, published in 1999 ( http://dl.acm.org/citation.cfm?id=319388). I've also been in the field for a number of years and support Mitchell's unwillingness to distinguish the two.
Now, I can *imagine* that when we use the term 'data mining' we are also including 'web mining' under the umbrella of 'data mining.' Web mining is a task that may involve data extraction performed without learning algorithms. 'Machine learning' places emphasis on the algorithmic learning aspect of mining. The widely used Weka text written by Witten and Frank does differentiate the two terms in this way. But more than a few of us in the community felt that when that text came out, as useful as it is for using Weka and teaching neophytes, the distinction was without precedent. It struck us as something the authors invented while writing the book's first edition. Their distinction is more along the lines of learning versus extraction, but that's a false distinction, as learning is often used for extraction when structuring data, and learning patterns in a data set is always a sort of "extraction," "discovery," etc. But even Witten and Frank aren't suggesting that one is more for unknowns and the other for knowns, or one is more for prediction and the other for description. Data mining/machine learning is used in a statistical framework, where statistics is quite clearly a field dedicated to handling uncertainty, which is to say it's hard to predict, forecast, or understand the patterns within data.
I feel that 'data mining' should redirect to 'machine learning,' or 'machine learning' redirect to 'data mining,' the section distinguishing the two should be removed, and the contents of the two pages merged. Textminer ( talk) 21:44, 11 May 2013 (UTC)
There is no discussion of validation, overfitting, or the bias/variance tradeoff. To me this is the whole point and the reason why wide data problems are so elusive. Izmirlig ( talk)
— Preceding unsigned comment added by Izmirlig ( talk • contribs) 18:42, 12 September 2013 (UTC)
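The overfitting and bias/variance issues raised above can be demonstrated with a toy sketch: a hypothetical "memorizer" (1-nearest-neighbour lookup, high variance) versus a fixed-threshold rule (high bias) on noisy synthetic data. All data and models here are invented for illustration:

```python
import random

random.seed(1)

# Noisy labels: true rule is y = 1 iff x > 0.5, but 20% of labels are flipped.
def draw(n, flip=0.2):
    data = []
    for _ in range(n):
        x = random.random()
        y = int(x > 0.5)
        if random.random() < flip:
            y = 1 - y
        data.append((x, y))
    return data

train, valid = draw(100), draw(1000)

def acc(predict, data):
    return sum(predict(x) == y for x, y in data) / len(data)

# High-variance model: memorize the training set, return the label of the
# nearest training point (so it also memorizes the label noise).
def memorizer(x):
    return min(train, key=lambda p: abs(p[0] - x))[1]

# High-bias model: a single fixed threshold, ignoring the training noise.
def threshold(x):
    return int(x > 0.5)

# Training accuracy is a misleading criterion: the memorizer is perfect on
# the training set yet generalizes worse than the simple threshold,
# which is exactly what a held-out validation set reveals.
print(acc(memorizer, train), acc(memorizer, valid), acc(threshold, valid))
```

With wide data (many features, few samples) this gap between training and validation performance is typically far more severe, which is the elusiveness the comment refers to.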
I modified the strong claim that Machine Learning systems try to create programs without an engineer's intuition. When a machine learning task is specified, a human decides how the data are to be represented (e.g. which attributes will be used or how the data need to be preprocessed). This is the "observation language". The designer also decides the "hypothesis language", i.e. how the learned concept will be represented. Decision trees, neural nets, and SVMs all have subtly different ways of describing the learned concept. The designer also decides on the kind of search that will be used, which biases the end result.
The way the page is written now, there is no distinction between machine learning and pattern recognition. Machine learning is much more than simple classification. Robots that learn how to act in groups are doing machine learning but not pattern recognition. I am not an expert at ML, but am an expert in pattern recognition. So I hope that someone will edit this page and put in more information about machine learning that is not also pattern recognition.
>You could call evolution a kind of "intelligence"
No. Evolution is not goal-directed.
Blaise 17:32, 30 Apr 2005 (UTC)
Unlike many in the ML community, who want to find computationally lightweight algorithms that scale to very large data sets, many statisticians are currently interested in computationally intensive algorithms. (We're interested in getting models that are as faithful as possible to the situation, and we generally work with smaller data sets, so the scaling isn't such a big issue.) The point I'm making is that the statement that "ML is synonymous with computational statistics" is just plain wrong.
Blaise 17:29, 30 Apr 2005 (UTC)
From the lede:
This doesn't cover transductive learning, where the data are finite and available upfront, but the pattern is unknown. Much unsupervised learning (clustering, topic modeling) follows this pattern as well. QVVERTYVS ( hm?) 17:32, 23 July 2014 (UTC)
I just hedged the new GA section by stating, and proving with references, that "genetic algorithms found some uses in the 1980s and 1990s". But actually, I'd much rather remove the passage, because AFAIC very little serious work on GAs is done in the machine learning community as opposed to serious stuff like graphical models, convex optimization, and other topics that are much less sexy than "pseudobiology" (as Skiena put it). I think devoting a section, however short, to GAs and not to, say, gradient descent optimization, is an utter misrepresentation of the field. QVVERTYVS ( hm?) 17:07, 21 October 2014 (UTC)
Here are some figures to make my point more clearly. The only recent, reasonably well-cited paper on GAs in ML that I could find is
By comparison:
I picked these papers because they all discuss optimization. They represent the algorithms that are actually in use, i.e., SMO, L-BFGS, coordinate descent. Not GAs. QVVERTYVS ( hm?) 17:57, 21 October 2014 (UTC)
Furthermore, GAs do not appear at all in Bishop's Pattern Recognition and Machine Learning, one of the foremost textbooks in the field. QVVERTYVS ( hm?) 18:09, 21 October 2014 (UTC)
I would like to point out that mathematical optimization and machine learning are two completely different things. Genetic Algorithms and gradient descent are optimization algorithms (global and local respectively), which can be applied in contexts that have no connection to machine learning whatsoever (I can cite countless examples). When we talk about machine learning we talk about "training algorithms", not optimization algorithms. Many training algorithms are derived from and can be expressed as mathematical optimization problems (the most typical examples being the Perceptron and SVM) and they apply some sort of optimization algorithm (gradient descent, GA, simulated annealing, etc.) to solve those problems. You can solve the Linear Perceptron using the default Delta rule (which derives from gradient descent optimization) or you can solve it in a completely different manner using a Genetic Algorithm. The fact that machine learning uses optimization doesn't mean that an optimization algorithm is a machine learning (training) algorithm.
Delafé ( talk) 08:46, 11 February 2015 (UTC)
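The distinction drawn above — the training task versus the optimizer used to solve it — can be illustrated with the perceptron example mentioned: below is a minimal hand-rolled perceptron trained with the delta-style error-correction update on toy AND-gate data. The optimizer inside the loop could be swapped for a GA without changing the learning task itself (the data and learning rate are invented for illustration):

```python
# Minimal perceptron trained with the delta-style update (an error-driven
# correction related to gradient descent on the output error).
# Toy task: learn the logical AND of two binary inputs.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1

def predict(x):
    return int(w[0] * x[0] + w[1] * x[1] + b > 0)

for _ in range(100):  # epochs; converges because the data are separable
    for x, y in data:
        err = y - predict(x)       # 0 when correct, ±1 when wrong
        w[0] += lr * err * x[0]    # nudge weights toward reducing the error
        w[1] += lr * err * x[1]
        b    += lr * err

print([predict(x) for x, _ in data])
```

The training algorithm is the outer loop and its error signal; the optimization method is the update rule inside it, which is exactly the separation the comment argues for.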
Machine learning was used in 2010 for breakthrough SSL by NSA. — Preceding unsigned comment added by 92.153.180.35 ( talk) 21:58, 9 March 2015 (UTC)
Can somebody please provide the reference or context for the following statement? "When employed in industrial contexts, machine learning methods may be referred to as predictive analytics or predictive modelling." I am of the opinion that this statement implies that in industry "predictive analytics" or "predictive modelling" is considered a machine learning method, and so machine learning is the basis for predictive analytics. That doesn't seem to be true; I believe predictive analytics forms the base for machine learning and its applications. And how can we club "predictive analytics" and "modelling" together, when these methods are applied at different stages of data processing/utilization and are very different from one another?
Thanks Naren ( talk) 12:05, 12 June 2015 (UTC)
Found this http://openclassroom.stanford.edu/MainFolder/VideoPage.php?course=MachineLearning&video=01.2-Introduction-WhatIsMachineLearning&speed=100 I think it's useful for the article-- Arado ( talk) 14:50, 2 July 2015 (UTC)
hi guys,
given that the great Sir Wiles has rejected the application of mathematics to finance, and machine learning itself is a manifestation of sophisticated mathematics, can we start the discussion about removing mentions of fields such as " computational finance" and/or " mathematical finance".
both fields, to me, have always felt dishonest and uncomfortable given their lack of rigorousness http://mathbabe.org/2013/10/06/sir-andrew-wiles-smacks-down-unethical-use-of-mathematics-for-profit/ it is long overdue for those who love to learn, to take a stand against the abuse of our beloved maths. 174.3.155.181 ( talk) 19:46, 2 April 2016 (UTC)
any thoughts by the *community* about the relevance of some of the commercial software entries? i am thinking this list can be long if we start adding arbitrary software. i was wondering if people would be open to trimming the list or removing it altogether. my thinking is that any prospective students should understand that this field is intense on mathematics, and while there is commercial appeal, much of the real work is done in the trenches.
things like google API and stuff can stay, obviously, but with the recent addition of a useless piece of software, i thought it'd be fruitful to have this discussion to prevent the list from growing.
there must be a healthy compromise that can be reached. — Preceding unsigned comment added by 174.3.155.181 ( talk) 18:25, 19 April 2016 (UTC)
This is because an optimizer constructed from sample data is a random variable, and the extreme value of the optimizer (minimum or maximum) cannot be more significant than other values of the optimizer. We should take the expectation of the optimizer to make statistical decisions, e.g. model selection. Yuanfangdelang ( talk) 19:59, 30 August 2016 (UTC)
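The point above — that a score computed on one random sample is itself a random variable, so model selection should use an expectation (in practice, an average over folds or resamples) rather than trust a single extreme value — can be sketched numerically. The "score" here is a stand-in Gaussian around a notional true value of 0.3, purely illustrative:

```python
import random
import statistics

random.seed(2)

# Stand-in for a validation error measured on one random data split:
# the true error is 0.3, but any single measurement is noisy.
def noisy_score():
    return 0.3 + random.gauss(0, 0.05)

# 20 single-split estimates versus 20 estimates that each average 10 splits
# (an empirical expectation, as in cross-validation).
single = [noisy_score() for _ in range(20)]
averaged = [statistics.mean(noisy_score() for _ in range(10))
            for _ in range(20)]

# Averaging shrinks the spread by roughly sqrt(10), so decisions based on
# the averaged criterion are far less driven by sampling luck.
print(statistics.stdev(single), statistics.stdev(averaged))
```

Picking the model with the smallest single-split score rewards lucky noise; comparing averaged scores is the expectation-based decision the comment advocates.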
There seem to be few chips on the market that are self-learning. There's at least one being manufactured today, see here KVDP ( talk) 13:21, 9 May 2017 (UTC)
The definition by Arthur Samuel (1959) seems to be non-existent. Some papers/books cite his key paper on ML in checkers games (see: http://aitopics.org/sites/default/files/classic/Feigenbaum_Feldman/Computers_And_Thought-Part_1_Checkers.pdf) but that doesn't contain a definition whatsoever (better yet, it states "While this is not the place to dwell on the importance of machine-learning procedures, or to discourse on the philosophical aspects", p. 71). So I wonder whether we should keep that definition on the wiki page... Otherwise I'm happy to receive the source+page where that definition is stated :)
Agree with above - this is a clear problem, as the WP leading quote can be found in many, many places around the Internet (as of 2017) with no actual citation. I've marked that reference as "disputed", since it doesn't cite any actual paper. — Preceding unsigned comment added by 54.240.196.185 ( talk) 16:17, 14 August 2017 (UTC)
The second source added by User:HelpUsStopSpam is behind a paywall and so isn't clear on the content. Can you excerpt the exact phrase and context used in that paper? 54.240.196.171 ( talk) 18:53, 17 August 2017 (UTC)
Hello fellow Wikipedians,
I have just modified one external link on Machine learning. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.
This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).
Cheers.— InternetArchiveBot ( Report bug) 12:51, 11 January 2018 (UTC)
The content of this article probably has some value but I suggest to merge it into main article as a small sub-section. Bbarmadillo ( talk) 21:08, 17 March 2018 (UTC)
I have recently added a section to include current research on a new machine learning method known as Linear Poisson Modelling. [1] [2] [3] [4] [5] [6] As this method has not been widely communicated, I can understand why some would rather not include such work on the main Machine Learning page at present. However, the method is now associated with more than a dozen co-authors in application-specific areas, so I believe it is worth noting. I have tentatively placed this in a new section regarding new and developing methods. Perhaps other new and developing methods could be placed there too? What criteria should be considered before inclusion? — Preceding unsigned comment added by 82.23.74.236 ( talk) 17:26, 9 June 2018
References
Shouldn't reinforcement learning be a subset of unsupervised learning?
I reorganized the Approaches section to more accurately represent the parent-child relationships of machine learning articles, as described in WP:SUMMARY style guidelines, and added text where I could by borrowing it from the lead sections of the child articles. I deleted the reference to List of machine learning algorithms as the primary main article (right under the section name) because it is not a more detailed version of the Approaches section as a whole. It is the opposite, a condensed list with no details. In a couple of other places, there were links to "main articles" that were not in fact child articles, as the label was intended for. It makes more sense to me to consider broader topics like the types of learning algorithms, the processes/techniques, and the models/frameworks used in ML to be the direct "children" of the Approaches section, so I created those headings and then sorted the text between them. I hope this makes the text easier to understand, and grasp at a higher level of understanding. Romhilde ( talk) 02:44, 25 November 2018 (UTC)
A discussion is taking place as to whether Portal:Machine learning is suitable for inclusion in Wikipedia according to Wikipedia's policies and guidelines or whether it should be deleted.
The page will be discussed at Wikipedia:Miscellany for deletion/Portal:Machine learning until a consensus is reached, and anyone is welcome to contribute to the discussion. The nomination will explain the policies and guidelines which are of concern. The discussion focuses on high-quality evidence and our policies and guidelines.
Users may edit the page during the discussion, including to improve the page to address concerns raised in the discussion. However, do not remove the deletion notice from the top of the page. North America 1000 10:36, 12 July 2019 (UTC)
I just posted this image to the article.
I liked it because
Blue Rasberry (talk) 18:43, 24 September 2019 (UTC)
The first paragraph of this section is very good, IMHO, but the last two are problematic. The second seems random and a little unfinished. The third raises an important point, but saying that statistical learning arose because "[s]ome statisticians have adopted methods from machine learning" is arguably confusing the chicken with the egg. It should also be mentioned here that statistical machine learning is a relatively well-established term (cf. e.g. this book) which has a meaning somewhere in between machine learning and statistical learning. Thomas Tvileren ( talk) 13:07, 1 November 2018 (UTC)
The new first line "Machine learning and statistics are closely related fields in terms of methods, but distinct in their principal goal: statistics draws population inferences from a sample, while machine learning finds generalizable predictive patterns" is wrong and is currently under discussion on social media. — Preceding unsigned comment added by 130.239.209.241 ( talk) 06:55, 9 December 2019 (UTC)
The claim “It is seen as a subset of artificial intelligence.” is wrong. Rephrased as “Methods from machine learning are used in some types of artificial intelligence.” it would be correct. In particular, it is by definition not part of wet AI unless biological materials are defined as “machines”. Artificial intelligence is about creating thinking machines, not just algorithmic descriptions of learning strategies. (It is probably “narrow AI” creeping into the article, or robotic process automation (RPA), aka “robotics”, aka business process automation, which is mostly just a sales pitch and has very little to do with AI.)
It seems like all kinds of systems with some small machine learning component are claimed to be AI today, and it creeps into books and articles. Machine learning is pretty far from weak AI and very far from strong AI. It is more like a necessary tool to build a house; it is not the house. Jeblad ( talk) 03:18, 28 December 2019 (UTC)
could someone possibly add some thoughts on how randomness is needed for ml? https://ai.stackexchange.com/questions/15590/is-randomness-necessary-for-ai?newreg=70448b7751cd4731b79234915d4a1248
i wish i could do it, but i lack the expertise or the time to bring this up in Wikipedia style, as it is evident by this very post and the chain of links in it, if you care enough to dig.
cheers! 😁😘 16:11, 27 February 2020 (UTC) — Preceding unsigned comment added by Cregox ( talk • contribs)
both ideas sound interesting, but they both look like optional techniques rather than necessary tools.
in my mind and from my understanding machine learning would never exist without random number generators.
as i also mentioned in my link there, i'll basically just copy and paste it here:
perhaps we're missing words here. randomness is the apparent lack of pattern or predictability in events. the more predictable something is, the dumber it becomes. of course just a bunch of random numbers doesn't make anything intelligent. it's much to the opposite: randomness is an artifact of intelligence. but if while we're reverse engineering intelligence (making ai) we can in practice see it does need rng to exist, then there's evidently something there between randomness and intelligence that's not just artifacts. could we call it chaos? rather continue there: cregox.net/random 11:12, 21 August 2020 (UTC) — Preceding unsigned comment added by Cregox ( talk • contribs)
The below text is inaccurate and none of the cited references support the statement:
Yet some practitioners, for example, Dr Daniel Hulme, who teaches AI and runs a company operating in the field, argue that machine learning and AI are separate. [1] [2] This quoted reference states that ML is part of AI: [3]
They are both commonly used terms of the English language with meanings defined by such. Any claim that there is no overlap is an extra-ordinary implausible claim. A discussion within some specialized view within some specialized topic venue is no basis for such a broad claim. Also, the insertion looks like spam to insert the person into the article. North8000 ( talk) 13:45, 29 October 2020 (UTC)
There is no reference to the below definition and it is inaccurate as a definition of Machine learning should not include AI. There are many textbook definitions of machine learning which could be used.
The current text without reference: Simple Definition: Machine learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed. Machine learning focuses on the development of computer programs that can access data and use it to learn for themselves.
Definition with reference: Machine learning is the study of computer algorithms that allow computer programs to automatically improve through experience.
Machine Learning, Tom Mitchell, McGraw Hill, 1997. http://www.cs.cmu.edu/afs/cs.cmu.edu/user/mitchell/ftp/mlbook.html — Preceding unsigned comment added by Tolgaors ( talk • contribs) 09:55, 29 October 2020 (UTC)
I suggest that a paragraph be added about the issues that can arise from maximizing a misaligned objective function. As an example, take the ethical challenges arising from recommendation algorithms in social media, [4] with negative effects such as creating distrust in traditional information channels, [5] actively spreading misinformation, [6] and creating addiction. [7] -- MountBrew ( talk) 18:37, 20 November 2020 (UTC)
This edit request by an editor with a conflict of interest has now been answered.
Under the "Proprietary" subheading, please add a link to PolyAnalyst. Sam at Megaputer ( talk) 02:14, 17 February 2021 (UTC)
Machine learning is not only neural networks! The old lead introduction from last year was much better without all the hype.
The named reference Alpaydin2020 was invoked but never defined (see the help page).
Archive 1 | Archive 2 |
The category Structured Data Mining is missing. See summarization Especially the sub-categories are also missing:
Two important books are:
"At a general level, there are two types of learning: inductive, and deductive."
What's deductive learning? Isn't learning inductive? -- Took 01:48, 10 April 2006 (UTC)
From a purely writing view, the rest of the paragraph (after the above quote) goes on to explain what inductive machine learning is, but deductive machine learning isn't covered at all. -- Ferris37 03:49, 9 July 2006 (UTC)
Should this article link to the "radial basis function" article, instead of linking to the two articles "radial" and "basic function"?
It's a minor, but I see in this article the format of the references is inconsistent. Bishop is cited once as Christopher M. Bishop and another one as Bishop, C.M. Is there a standard format for wikipedia references? Jose
Some people, mainly researchers of this field (ML) are blogging about this subject. Some blogs are really interesting. Is there a space in an encyclopedia for links to those blogs ? I can see 3 pbs with this:
What do you think of adding a blog links section ? Dangauthier 14:11, 13 March 2006 (UTC)
I deleted the link to a supposed ML blog [1] which wasn't relevant, and was not in english.
suggestion = archive bin required Sanjiv swarup ( talk) 07:44, 17 September 2008 (UTC)
Is there any reason that the See Also section is formatted in columns? Or was that just the result of some vestigial code... WDavis1911 ( talk) 20:38, 27 July 2009 (UTC)
On this page, and the main unsupervised learning page, the phrase "labeled examples" is not explained or defined before being used. Can somebody come up with a concise definition? -- Bcjordan ( talk) 16:31, 15 September 2009 (UTC)
Hi,
In the following context As a broad subfield of artificial intelligence, machine learning is concerned with the design and development of algorithms and techniques that allow computers to "learn" ,no definition of the last word in the sentense - "learn" - is given. However, it appears very essential, because it's central to this main definition.
A definition like "machine learning is an algorithm that allows machines to learn" sounds to me like a perfectly tautologous definition.
It's my understanding that this article is about either computer science, or mathematics, or statistics, or some other "exact" discipline. All of these disciplines have quite exact definitions of everything, except for those very few undefined terms that are declared upfront as axioms or undefined concepts. Examples: point, set, "Axiom of choice".
In this article, the purpose of Machine Learning and the tools it uses are clear to me as a reader. But the very method is obscure - what exactly it means for a machine to 'learn'. Would somebody please define "learn" in precise terms, without resorting to other words, like 'understand' or 'intelligence', that are obscure and not exactly defined in the technical world?
There must exist a formal definition of 'learn', but if not, then, in my opinion, in order to avoid confusion, it should be clearly stated upfront that the very subject of machine learning is not clearly defined.
Compare this, for example, to how 'mathematics' is defined, or how the functions of ASIMO robot are clearly defined in Wikipedia.
Thanks in advance, Raokramer 13:28, 8 October 2007 (UTC)
There are formal definitions of what "learn" means. Basically it is about generalizing from a finite set of training examples, to allow the learning agent to do something (e.g. make a prediction, a classification, predict a probability, find a good representation) well (according to some mathematically defined criterion, such as prediction error) on new examples (that have something in common with the training examples, e.g., typically they are assumed to come from the same underlying distribution).
Yoshua Bengio March 26th, 2011. —Preceding undated comment added 01:18, 26 March 2011 (UTC).
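The definition Bengio gives above can be sketched in a few lines of Python. Everything in the sketch is invented for illustration — the hidden rule (a threshold at 5.0), the choice of a 1-nearest-neighbour learner, and the data points:

```python
# Sketch only: the "unknown underlying rule" is a threshold at 5.0,
# the learner is 1-nearest-neighbour, and all data points are invented.

def true_label(x):
    # The hidden rule that generates the data; the learner never sees it.
    return 1 if x >= 5.0 else 0

train = [(x, true_label(x)) for x in [0.5, 1.0, 2.5, 6.0, 7.5, 9.0]]

def predict(x):
    # 1-nearest-neighbour: copy the label of the closest training point.
    nearest = min(train, key=lambda ex: abs(ex[0] - x))
    return nearest[1]

# Generalization: accuracy on examples the learner has never seen.
test = [(x, true_label(x)) for x in [0.2, 3.0, 4.9, 8.0, 9.9]]
accuracy = sum(predict(x) == y for x, y in test) / len(test)
print(accuracy)  # 0.8: right far from the boundary, wrong near it (4.9)
```

The learner generalizes from six training points to new cases, but errs near the decision boundary — which is exactly the gap between memorizing examples and learning the underlying rule.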
Does anyone think snipping the FR section (and moving it here) would encourage people to actually write something? -- Adoniscik( t, c) 02:40, 13 October 2008 (UTC)
In this diff, the "Bibliography" section was converted to "Further reading". Looking at the history, it's clearly an aggregation of actual sources with other things just added for the heck of it. It is sometimes possible to see what an editor was adding when he added a source there, so there are good clues for how we could go about citing sources for the contents of the article. It's too bad it developed so far so early, before there was much of an ethic of actually citing sources, because now it will be a real pain to fix. Anyone up for working on it? Dicklyon ( talk) 18:56, 10 April 2011 (UTC)
This article should definitely link to pattern recognition. And I feel there should be some discussion on what belongs on pattern recognition and what on machine learning. T3kcit ( talk) 06:21, 23 August 2011 (UTC)
Do all learning algorithms perform search? All rule/decision-tree algorithms certainly do search. Are there any exceptions?
Are there any other exceptions? Pgr94 ( talk) 12:31, 16 April 2008 (UTC)
One capability central to many kinds of learning is the ability to generalize [...] The purpose of this paper is to compare various approaches to generalization in terms of a single framework. Toward this end, generalization is cast as a search problem, and alternative methods for generalization are characterized in terms of search strategies that they employ. [...] Conclusion: The problem of generalization may be viewed as a search problem involving a large hypothesis space of generalizations. [...] Generalization as search, Tom Mitchell, Artificial Intelligence (1982) doi: 10.1016/0004-3702(82)90040-6
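Mitchell's framing can be made concrete with a toy version-space search. The hypothesis grid and the four training pairs below are invented for the sketch:

```python
# Sketch only: a tiny hypothesis space of threshold rules and four
# invented training pairs; "learning" is the search for consistent rules.

train = [(1.0, 0), (2.0, 0), (6.0, 1), (8.0, 1)]

# Hypothesis space: "label 1 iff x >= t" for a grid of thresholds t.
hypotheses = [t / 2 for t in range(0, 21)]  # 0.0, 0.5, ..., 10.0

def consistent(t):
    # A hypothesis survives if it reproduces every training label.
    return all((1 if x >= t else 0) == y for x, y in train)

# The search: scan the space, keep every hypothesis that fits the data.
version_space = [t for t in hypotheses if consistent(t)]
print(version_space)  # [2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0]
```

Learning here is literally search: every surviving threshold is a generalization consistent with the training data, and more data would narrow the version space further.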
Is "representation learning" sufficiently notable to warrant a subsection? The machine learning journal and journal of machine learning research have no articles with "representation learning" in the title. Does anyone have any machine learning textbooks with a chapter on the topic (none of mine do)? There is no wikipedia article on the subject. Any objections to deleting? pgr94 ( talk) 22:38, 15 August 2011 (UTC)
Recently I've heard the term Adversarial Machine Learning a few times but I can't find anything about it on Wikipedia. Is this a real field which should be covered in this article, or even get its own article? — Hippietrail ( talk) 07:47, 29 July 2012 (UTC)
The lead section of the article is badly-written and very confusing.
“ | Machine learning, a branch of artificial intelligence, is a scientific discipline concerned with the design and development of algorithms that take as input empirical data, such as that from sensors or databases, and yield patterns or predictions thought to be features of the underlying mechanism that generated the data. A learner can take advantage of examples (data) to capture characteristics of interest of their unknown underlying probability distribution. Data can be seen as instances of the possible relations between observed variables. A major focus of machine learning research is the design of algorithms that recognize complex patterns and make intelligent decisions based on input data. One fundamental difficulty is that the set of all possible behaviors given all possible inputs is too large to be included in the set of observed examples (training data). Hence the learner must generalize from the given examples in order to produce a useful output in new cases. | ” |
For example, the word "learner" is introduced without any context. For another example, the beginning sentence is very long and meandering. Finally, the end sentence is very poorly explained and seems to be a detail which does not belong in a lead section. A lot of words are tagged on. This lead certainly does not summarize the article. Thus I am tagging this article. JoshuSasori ( talk) 06:09, 28 September 2012 (UTC)
I recently read an article about distance metric learning (jmlr.csail.mit.edu/papers/volume13/ying12a/ying12a.pdf) and it appears that there should be a section dedicated to preprocessing techniques. Distance metric learning has to do with learning a Mahalanobis distance which describes whether samples are similar or not. One could proceed to transform the data into a space where irrelevant variation is minimized and the variation that is correlated to the learning task is preserved (relevant component analysis). I think feature selection/extraction should also be mentioned.
I believe a brief section discussing preprocessing and linking to the relevant sections would be beneficial. However, such a change should have the support of the community. Please comment and provide your opinions. — Preceding unsigned comment added by 150.135.222.151 ( talk) 22:36, 28 September 2012 (UTC)
"Algorithm Types" should probably not link to Taxonomy. It is simpler and more precise to say "machine learning algorithms can be categorized by different qualities." StatueOfMike ( talk) 18:18, 26 February 2013 (UTC)
I find the "Algorithm Types" section very help for providing context for the rest of the article. I propose adding a section/subsection "Problem Types" to provide a more complete context. For example. many portions of the rest of the article will say something like "is supervised learning method used for classification and regression". "Supervised Learning" is explained somewhat under the "Algorithm Types" section, but the problem types are not. Structured learning already has a good breakdown of problem types in machine learning. We could incorporate that here, and hopefully expand on it. StatueOfMike ( talk) 23:12, 8 February 2013 (UTC)
I find the machine learning page pretty good. However, the distinction between machine learning and data mining presented in this article is misleading and probably not right. The terms 'data mining' and 'machine learning' are used interchangeably by the masters of the field along with plenty of us regular practitioners. The distinction presented in this article--that one deals with knowns and the other with unknowns--just isn't right. I'm not sure how to be positive about it. Data mining and machine learning engage in dealing with both knowns and unknowns because they're both really the same thing.
My primary source for there being no difference between the terms is the author of the definitive and most highly cited machine learning/data mining text, "Machine Learning" ( Mitchell, Tom M. Burr Ridge, IL: McGraw Hill, 1997), Carnegie Mellon Machine Learning Department chief, Tom Mitchell. Mitchell actually tackles head-on the lack of real distinction between the terms in a paper he published for Communications of the ACM, published in 1999 ( http://dl.acm.org/citation.cfm?id=319388). I've also been in the field for a number of years and support Mitchell's unwillingness to distinguish the two.
Now, I can *imagine* that when we use the term 'data mining' we are also including 'web mining' under the umbrella of 'data mining.' Web mining is a task that may involve data extraction performed without learning algorithms. 'Machine learning' places emphasis on the algorithmic learning aspect of mining. The widely used Weka text written by Witten and Frank does differentiate the two terms in this way. But more than a few of us in the community felt that when that text came out, as useful as it is for using Weka and teaching neophytes, the distinction was without precedent. It struck us as something the authors invented while writing the book's first edition. Their distinction is more along the learning versus extraction distinction, but that's a false distinction as learning is often used for extraction for structuring data, and learning patterns in a data set is always a sort of "extraction," "discovery," etc. But even Witten and Frank aren't suggesting that one is more for unknowns and the other for knowns, or one is more for prediction and the other for description. Data mining/machine learning is used in a statistical framework, where statistics is quite clearly a field dedicated to handling uncertainty, which is to say it's hard to predict, forecast, or understand the patterns within data.
I feel that 'data mining' should redirect to 'machine learning,' or 'machine learning' redirect to 'data mining,' the section distinguishing the two should be removed, and the contents of the two pages merged. Textminer ( talk) 21:44, 11 May 2013 (UTC)
There is no discussion of validation, over-fit and the bias/variance tradeoff. To me this is the whole point and the reason why wide data problems are so elusive. Izmirlig ( talk)
— Preceding unsigned comment added by Izmirlig ( talk • contribs) 18:42, 12 September 2013 (UTC)
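A minimal sketch of the missing point, with synthetic data and invented learners: a high-variance learner that memorizes its training set looks perfect in-sample but collapses on held-out data, which is why validation matters.

```python
import random

# Sketch only: synthetic labelled data from an invented rule (x >= 5),
# a learner that memorizes, and a one-parameter rule, compared on a
# held-out test set.

random.seed(0)

def sample(n):
    xs = [random.uniform(0, 10) for _ in range(n)]
    return [(x, 1 if x >= 5 else 0) for x in xs]

train, test = sample(20), sample(20)

memorized = {x: y for x, y in train}

def table_learner(x):
    # High variance: perfect recall of training points, blind otherwise.
    return memorized.get(x, 0)

def threshold_learner(x):
    # Low variance: a single threshold, here matching the true rule.
    return 1 if x >= 5 else 0

def accuracy(f, data):
    return sum(f(x) == y for x, y in data) / len(data)

print(accuracy(table_learner, train))      # perfect in-sample
print(accuracy(table_learner, test))       # collapses out of sample
print(accuracy(threshold_learner, test))   # generalizes
```

Training accuracy alone cannot distinguish the two learners; only the held-out set reveals the over-fit.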
I modified the strong claim that Machine Learning systems try to create programs without an engineer's intuition. When a machine learning task is specified, a human decides how the data are to be represented (e.g. which attributes will be used or how the data need to be preprocessed). This is the "observation language". The designer also decides the "hypothesis language", i.e. how the learned concept will be represented. Decision trees, neural nets, SVMs all have subtly different ways of describing the learned concept. The designer also decides on the kind of search that will be used, which biases the end result.
The way the page is written now, there is no distinction made between machine learning and pattern recognition. Machine learning is much more than simple classification. Robots that learn how to act in groups is machine learning but not pattern recognition. I am not an expert at ML, but am an expert in pattern recognition. So I hope that someone will edit this page and put in more information about machine learning that is not also pattern recognition.
>You could call evolution a kind of "intelligence"
No. Evolution is not goal-directed.
Blaise 17:32, 30 Apr 2005 (UTC)
Unlike many in the ML community, who want to find computationally lightweight algorithms that scale to very large data sets, many statisticians are currently interested in computationally intensive algorithms. (We're interested in getting models that are as faithful as possible to the situation, and we generally work with smaller data sets, so the scaling isn't such a big issue.) The point I'm making is that the statement that "ML is synonymous with computational statistics" is just plain wrong.
Blaise 17:29, 30 Apr 2005 (UTC)
From the lede:
This doesn't cover transductive learning, where the data are finite and available upfront, but the pattern is unknown. Much unsupervised learning (clustering, topic modeling) follows this pattern as well. QVVERTYVS ( hm?) 17:32, 23 July 2014 (UTC)
I just hedged the new GA section by stating, and proving with references, that "genetic algorithms found some uses in the 1980s and 1990s". But actually, I'd much rather remove the passage, because AFAIC very little serious work on GAs is done in the machine learning community as opposed to serious stuff like graphical models, convex optimization, and other topics that are much less sexy than "pseudobiology" (as Skiena put it). I think devoting a section, however short, to GAs and not to, say, gradient descent optimization, is an utter misrepresentation of the field. QVVERTYVS ( hm?) 17:07, 21 October 2014 (UTC)
Here are some figures to make my point more clearly. The only recent, reasonably well-cited paper on GAs in ML that I could find is
By comparison:
I picked these papers because they all discuss optimization. They represent the algorithms that are actually in use, i.e., SMO, L-BFGS, coordinate descent. Not GAs. QVVERTYVS ( hm?) 17:57, 21 October 2014 (UTC)
Furthermore, GAs do not appear at all in Bishop's Pattern Recognition and Machine Learning, one of the foremost textbooks in the field. QVVERTYVS ( hm?) 18:09, 21 October 2014 (UTC)
I would like to point out that mathematical optimization and machine learning are two completely different things. Genetic Algorithms and gradient descent are optimization algorithms (global and local respectively), which can be applied in contexts that have no connection to machine learning whatsoever (I can cite countless examples). When we talk about machine learning we talk about "training algorithms", not optimization algorithms. Many training algorithms are derived from and can be expressed as mathematical optimization problems (most typical examples being the Perceptron and SVM) and they apply some sort of optimization algorithm (gradient descent, GA, simulated annealing, etc) to solve those problems. You can solve the Linear Perceptron using the default Delta rule (which derives from gradient descent optimization) or you can solve it in a completely different manner using a Genetic Algorithm. The fact that machine learning uses optimization doesn't mean that an optimization algorithm is a machine learning (training) algorithm.
Delafé ( talk) 08:46, 11 February 2015 (UTC)
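To make the distinction above concrete, here is a sketch (toy data and learning rate invented) in which the perceptron is the model being trained and the delta rule is merely the optimization routine; a genetic algorithm searching over the same weights could be swapped in without changing the model:

```python
# Sketch only: invented toy data, a perceptron model, and the delta
# rule as the optimization routine that fits it.

train = [((0.0, 1.0), 0), ((1.0, 3.0), 0), ((3.0, 0.5), 1), ((4.0, 2.0), 1)]

w = [0.0, 0.0]
b = 0.0
rate = 0.1

def predict(x):
    # The model: a linear threshold unit (perceptron).
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else 0

# The optimizer: the delta rule nudges the weights toward each mistake.
# A genetic algorithm evolving (w, b) would train the very same model.
for _ in range(200):
    for x, y in train:
        error = y - predict(x)
        w[0] += rate * error * x[0]
        w[1] += rate * error * x[1]
        b += rate * error

print(all(predict(x) == y for x, y in train))  # True: the data are separable
```

The model/optimizer split is visible in the code: `predict` defines what is learned, while the update loop is just one interchangeable way of searching for good weights.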
Machine learning was used in 2010 for breakthrough SSL by NSA. — Preceding unsigned comment added by 92.153.180.35 ( talk) 21:58, 9 March 2015 (UTC)
Can somebody please provide the reference or context for arriving at the following statement? "When employed in industrial contexts, machine learning methods may be referred to as predictive analytics or predictive modelling." I am of the opinion that this statement implies that in industry "predictive analytics" or "predictive modelling" are considered machine learning methods, and so machine learning is the basis for predictive analytics. That doesn't seem to be true; I believe predictive analytics forms the base for machine learning and its applications. And how can we club "predictive analytics" and "modelling" together, as these methods are applied at different stages of data processing/utilization and are very different from one another?
Thanks Naren ( talk) 12:05, 12 June 2015 (UTC)
Found this http://openclassroom.stanford.edu/MainFolder/VideoPage.php?course=MachineLearning&video=01.2-Introduction-WhatIsMachineLearning&speed=100 I think it's useful for the article-- Arado ( talk) 14:50, 2 July 2015 (UTC)
hi guys,
given that the great Sir Wiles has rejected the application of mathematics to finance, and machine learning itself is a manifestation of sophisticated mathematics, can we start the discussion about removing mentions of fields such as " computational finance" and/or " mathematical finance".
both fields, to me, have always felt dishonest and uncomfortable given their lack of rigorousness http://mathbabe.org/2013/10/06/sir-andrew-wiles-smacks-down-unethical-use-of-mathematics-for-profit/ it is long overdue for those who love to learn, to take a stand against the abuse of our beloved maths. 174.3.155.181 ( talk) 19:46, 2 April 2016 (UTC)
any thoughts by the *community* about the relevance of some of the commercial software entries? i am thinking this list can be long if we start adding arbitrary software. i was wondering if people would be open to trimming the list or removing it altogether. my thinking is that any prospective students should understand that this field is intense on mathematics, and while there is commercial appeal, much of the real work is done in the trenches.
things like google API and stuff can stay, obviously, but with the recent addition of a useless piece of software, i thought it'd be fruitful to have this discussion to prevent the list from growing.
there must be a healthy compromise that can be reached. — Preceding unsigned comment added by 174.3.155.181 ( talk) 18:25, 19 April 2016 (UTC)
This is because an optimizer constructed with sample data is a random variable, and the extreme value of the optimizer (minimum or maximum) cannot be more significant than other values of the optimizer. We should take the expectation of the optimizer to make statistical decisions, e.g. model selection. Yuanfangdelang ( talk) 19:59, 30 August 2016 (UTC)
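A minimal numerical illustration of this remark, with an invented Gaussian example: the minimizer of squared error over a sample — the sample mean — is itself a random variable that varies across resamples, while its expectation sits near the true parameter:

```python
import random

# Sketch only: an invented Gaussian population with true mean 10.0.
# The least-squares "optimizer" fitted to a sample is the sample mean.

random.seed(1)

def fitted_mean(n=30):
    # Refit the optimizer on a fresh sample each time.
    sample = [random.gauss(10.0, 2.0) for _ in range(n)]
    return sum(sample) / n

fits = [fitted_mean() for _ in range(200)]
spread = max(fits) - min(fits)
average = sum(fits) / len(fits)

print(spread > 0)                 # the optimizer varies across samples
print(abs(average - 10.0) < 0.5)  # its expectation is near the truth
```

Any single fitted optimum is just one draw from the distribution of `fits`, which is the sense in which an extreme value of the optimizer carries no special significance.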
There seem to be few chips on the market that are self-learning. There's at least one being manufactured today, see here KVDP ( talk) 13:21, 9 May 2017 (UTC)
The definition by Arthur Samuel (1959) seems to be non-existent. Some papers/books cite his key paper on ML in Checkers-games (see: http://aitopics.org/sites/default/files/classic/Feigenbaum_Feldman/Computers_And_Thought-Part_1_Checkers.pdf) but that doesn't contain a definition whatsoever (better yet, it states "While this is not the place to dwell on the importance of machine-learning procedures, or to discourse on the philosophical aspects" p.71). So I wonder whether we should keep that definition in the wiki-page... Otherwise I'm happy to receive the source+page where that definition is stated :)
Agree with above - this is a clear problem, as the WP leading quote can be found in many, many places around the Internet (as of 2017) with no actual citation. I've marked that reference as "disputed", since it doesn't cite any actual paper. — Preceding unsigned comment added by 54.240.196.185 ( talk) 16:17, 14 August 2017 (UTC)
The second source added by User:HelpUsStopSpam is behind a paywall and so isn't clear on the content. Can you excerpt the exact phrase and context used in that paper? 54.240.196.171 ( talk) 18:53, 17 August 2017 (UTC)
Hello fellow Wikipedians,
I have just modified one external link on Machine learning. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.
This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).
Cheers.— InternetArchiveBot ( Report bug) 12:51, 11 January 2018 (UTC)
The content of this article probably has some value but I suggest to merge it into main article as a small sub-section. Bbarmadillo ( talk) 21:08, 17 March 2018 (UTC)
I have recently added a section to include current research on a new machine learning method known as Linear Poisson Modelling. [1] [2] [3] [4] [5] [6] As this method has not been widely communicated, I can understand why some would rather not include such work on the main Machine Learning page at present. However, the method is now associated with more than a dozen co-authors in application-specific areas, so I believe it is worth noting. I have tentatively placed this in a new section regarding new and developing methods. Perhaps other new and developing methods could be placed there too? What criteria should be considered before inclusion? — Preceding unsigned comment added by 82.23.74.236 ( talk) 17:26, 9 June 2018
References
Shouldn't reinforcement learning be a subset of unsupervised learning?
I reorganized the Approaches section to more accurately represent the parent-child relationships of machine learning articles, as described in WP:SUMMARY style guidelines, and added text where I could by borrowing it from the lead sections of the child articles. I deleted the reference to List of machine learning algorithms as the primary main article (right under the section name) because it is not a more detailed version of the Approaches section as a whole. It is the opposite, a condensed list with no details. In a couple of other places, there were links to "main articles" that were not in fact child articles, as the label was intended for. It makes more sense to me to consider broader topics like the types of learning algorithms, the processes/techniques, and the models/frameworks used in ML to be the direct "children" of the Approaches section, so I created those headings and then sorted the text between them. I hope this makes the text easier to understand, and grasp at a higher level of understanding. Romhilde ( talk) 02:44, 25 November 2018 (UTC)
A discussion is taking place as to whether Portal:Machine learning is suitable for inclusion in Wikipedia according to Wikipedia's policies and guidelines or whether it should be deleted.
The page will be discussed at Wikipedia:Miscellany for deletion/Portal:Machine learning until a consensus is reached, and anyone is welcome to contribute to the discussion. The nomination will explain the policies and guidelines which are of concern. The discussion focuses on high-quality evidence and our policies and guidelines.
Users may edit the page during the discussion, including to improve the page to address concerns raised in the discussion. However, do not remove the deletion notice from the top of the page. North America 1000 10:36, 12 July 2019 (UTC)
I just posted this image to the article.
I liked it because
Blue Rasberry (talk) 18:43, 24 September 2019 (UTC)
The first paragraph of this section is very good, IMHO, but the last two are problematic. The second seems random and a little unfinished. The third raises an important point, but saying that statistical learning arose because "[s]ome statisticians have adopted methods from machine learning" is arguably confusing the chicken with the egg. It should also be mentioned here that statistical machine learning is a relatively well-established term (cf. e.g. this book) which has a meaning somewhere in between machine learning and statistical learning. Thomas Tvileren ( talk) 13:07, 1 November 2018 (UTC)
The new first line "Machine learning and statistics are closely related fields in terms of methods, but distinct in their principal goal: statistics draws population inferences from a sample, while machine learning finds generalizable predictive patterns" is wrong and is currently under discussion on social media. — Preceding unsigned comment added by 130.239.209.241 ( talk) 06:55, 9 December 2019 (UTC)
The claim “It is seen as a subset of artificial intelligence.” is wrong. Rephrased as “Methods from machine learning are used in some types of artificial intelligence.” it would be correct. In particular, it is per definition not part of wet AI unless biological material is defined as “machines”. Artificial intelligence is about creating thinking machines, not just algorithmic descriptions of learning strategies. (It is probably “narrow AI” creeping into the article, or robotic process automation (RPA) aka “robotics” aka business process automation, which is mostly just sales pitch and has very little to do with AI.)
It seems like all kinds of systems with some small part of machine learning are claimed to be AI today, and it creeps into books and articles. Machine learning is pretty far from weak AI and very far from strong AI. It is more like a necessary tool to build a house; it is not the house. Jeblad ( talk) 03:18, 28 December 2019 (UTC)
could someone possibly add some thoughts on how randomness is needed for ml? https://ai.stackexchange.com/questions/15590/is-randomness-necessary-for-ai?newreg=70448b7751cd4731b79234915d4a1248
i wish i could do it, but i lack the expertise or the time to bring this up in Wikipedia style, as it is evident by this very post and the chain of links in it, if you care enough to dig.
cheers! 😁😘 16:11, 27 February 2020 (UTC) — Preceding unsigned comment added by Cregox ( talk • contribs)
both ideas sound interesting, but they both look like optional techniques rather than necessary tools.
in my mind and from my understanding machine learning would never exist without random number generators.
as i also mentioned in my link there, i'll basically just copy and paste it here:
perhaps we're missing words here. randomness is the apparent lack of pattern or predictability in events. the more predictable something is, the dumber it becomes. of course just a bunch of random numbers doesn't make anything intelligent. it's much to the opposite: randomness is an artifact of intelligence. but if while we're reverse engineering intelligence (making ai) we can in practice see it does need rng to exist, then there's evidently something there between randomness and intelligence that's not just artifacts. could we call it chaos? rather continue there: cregox.net/random 11:12, 21 August 2020 (UTC) — Preceding unsigned comment added by Cregox ( talk • contribs)
The below text is inaccurate and none of the cited references support the statement:
Yet some practitioners, for example, Dr Daniel Hulme, who teaches AI and runs a company operating in the field, argues that machine learning and AI are separate. [1] [2] This quoted reference states that ML is part of AI: [3]
They are both commonly used terms of the English language with meanings defined by such. Any claim that there is no overlap is an extraordinarily implausible claim. A discussion within some specialized view within some specialized topic venue is no basis for such a broad claim. Also, the insertion looks like spam to insert the person into the article. North8000 ( talk) 13:45, 29 October 2020 (UTC)
There is no reference for the definition below, and it is inaccurate, as a definition of machine learning should not include AI. There are many textbook definitions of machine learning which could be used.
The current text without reference: Simple Definition: Machine learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed. Machine learning focuses on the development of computer programs that can access data and use it to learn for themselves.
Definition with reference: Machine learning is the study of computer algorithms that allow computer programs to automatically improve through experience.
Machine Learning, Tom Mitchell, McGraw Hill, 1997. http://www.cs.cmu.edu/afs/cs.cmu.edu/user/mitchell/ftp/mlbook.html — Preceding unsigned comment added by Tolgaors ( talk • contribs) 09:55, 29 October 2020 (UTC)
I think that a paragraph be added about the issues that can arise from maximizing a mis-aligned objective function. As an example, just take the ethical challenges arising from recommendation algorithms in social media, [4] with negative effects such as creating distrust in traditional information channels, [5] actively spreading misinformation, [6] and creating addiction. [7] -- MountBrew ( talk) 18:37, 20 November 2020 (UTC)
This edit request by an editor with a conflict of interest has now been answered.
Under the "Proprietary" subheading, please add a link to PolyAnalyst. Sam at Megaputer ( talk) 02:14, 17 February 2021 (UTC)
Machine learning is not only neural networks! The old lead introduction from last year was much better without all the hype.
Cite error: the named reference Alpaydin2020 was invoked but never defined (see the help page).