
For the Wikipedia article entitled “Information Privacy,” I thought that all of the information was relevant to the topic. However, certain parts of the article were clearer than others. There were many examples for “information types,” but a lack of information in the section on “legality.” I believe that more detail would have been better for this segment, rather than simply linking a “main article” and giving merely a sentence of description. Given that a wide range of examples was provided, I think the article was fairly neutral. Looking at the sources quantitatively, there were certainly enough sources, as there are more sources than paragraphs; as the training suggested, there should be at least one source per paragraph. Looking at the sources’ content, they seem to be published books, patents, or news reports (specifically from the BBC), which are generally regarded as impartial. The talk page was very interesting to look at, as people were explaining why they took out or modified certain parts of the article. The fact that some pointed out they were omitting “opinions,” for example, was nice to see, as it means they know the Wikipedia guidelines reasonably well.

For the Wikipedia article entitled “Computer Security,” the information all seemed quite relevant. However, I thought there were some opinions. Although technology is so prominent in society, saying that computer security is “one of the major challenges of contemporary world” could be a bit of a stretch for some readers. In general, the article seemed well organized and complete; the only part I would have been more thorough with is “Types of security and privacy,” as I would have explained each type before linking the pages. There are certainly ample sources, and many seem to be scholarly articles, news articles, or books from university libraries (all of which are reliable sources). On the talk page, people seemed very respectful of others when making edits or editing other people’s information. I would certainly appreciate this on my own Talk Page later this semester.

Both of these articles were fairly well-written. The second certainly had a wider scope, whereas the first was more precise. Both were fairly objective, with the exception of the opinion I pointed out in the second article. Overall, I hope to have pages that attract as many fellow editors as both of these have, as that will ultimately make my article even better.

Citation/Summary of One Point in an Article I researched:

About a quarter of seniors in high school apply to college, and only about half of this selective group end up graduating. [1]

I also added citations to sentences marked "citation needed" in these articles:

"China Part of Editing Privacy Law"

"Privacy Laws of the United States."

I helped make the articles more trustworthy, as a sentence that still needs a citation signals that readers may not be able to trust its source.

I am going to create a page on "Virtual Assistant Privacy" as I believe that this is an important topic that the public should be informed about. I plan to lay out the facts of precisely what information these virtual assistants gather, what information ends up getting passed on to their parent companies, and how, if at all, these companies are allowed to use this information. In the process, I will decode information about privacy in the AI and technological fields as a whole, as ultimately these virtual assistants act as bridges that bring information to other media. When one tells Siri about a reminder, Siri then brings that information to the "reminder" app; thus, virtual assistants create a need for information on privacy in modern technology as a whole. Many people in society use these technological devices blindly and without knowledge of the issues that can occur with them. Hopefully, with the facts I plan to lay out in my article, people can be more informed users who compromise their privacy less than they did before reading my article.

An outline of my article would likely include general facts about virtual assistants and AI as a whole. Then, I would connect this information to privacy and ultimately the infringement of people's privacy. This information would all be laid out as facts with an objective point of view.

Formal outline:

Lead Section Outline:

Privacy of Virtual Assistants:

Virtual Assistants are software technology that ultimately provide services to customers through various algorithms that follow the commands customers give. Well known virtual assistants include Alexa and Siri, which come from their parent corporations, Amazon Incorporated and Apple Incorporated respectively. There are privacy issues concerning what information can go to these third-party corporations.

There are specifically issues with regard to the lack of verification necessary for the virtual assistants to take commands. As of right now, there is only one layer of authentication, which is the voice; there is not a layer that requires the owner of the virtual assistant to be present. Such privacy concerns have caused technicians to think of ways to add more verification, such as the VS Button.

Rather than taking these potential infringements to heart, consumers value the convenience that virtual assistants provide.

Various patents have addressed the requirement for technology, such as Artificial Intelligence, to incorporate Privacy by Design. This way, corporations do not have to retrofit privacy into their designs later; designs can be written with privacy in mind from the start. This would allow for a more fail-safe method of making sure that privacy algorithms would not leave out even edge cases.

Sections I Want to Include:

-Alexa specifically: measures Amazon takes to ensure that users have privacy; measures Amazon "accidentally" doesn't take (how does this infringe upon information?)

-Siri specifically: measures Apple takes to ensure that users have privacy; measures Apple “accidentally” doesn’t take (how does this infringe upon information?); how does the corporation use such stolen information?

-One layer versus multilayer authentication: VS button

-Convenience versus safety

-Privacy by design: how does this help to solve the privacy predicament?

-Artificial Intelligence and its relation to virtual assistants: How have artificial intelligence and its standards for privacy shaped virtual assistant privacy?

-How do the parent companies influence the virtual assistants? How does Siri’s privacy differ from Alexa’s? Is this because of the parent corporations or because Alexa is not as easily accessible on the phone as Siri is?

*bolded words constitute the words that I will probably end up hyperlinking to other pages*

MY ARTICLE FIRST DRAFT

Privacy of Virtual Assistants: Information

Virtual Assistants are software technology that assist users in completing various tasks. [2] Well known virtual assistants include Alexa and Siri, and these assistants are from their parent corporations named Amazon Incorporated and Apple Incorporated respectively. Other companies, such as Google and Microsoft, also have virtual assistants. There are privacy issues concerning what information can go to the third-party corporations that operate virtual assistants and how this data can potentially be used. [3]

Because virtual assistants are often considered "nurturing" bodies, similar to robots or other artificial intelligence, consumers may overlook potential controversies and value their convenience more than their privacy. When forming relationships with devices, humans tend to become closer to those that perform human-like functions, which is what virtual assistants do. [4] In order to allow users both convenience and assistance, privacy by design and the Virtual Security Button propose methods in which both are possible.

One layer versus multilayer authentication

The Virtual Security button would provide a method to add multilayer authentication to devices that currently have only a single layer; these single-layer authentication devices require solely a voice to be activated. [5] That voice could belong to any person, not necessarily the intended user, which makes the method unreliable. Multilayer authentication means that there are multiple layers of security required to authorize a virtual assistant to work. The Virtual Security button would provide a second layer of authentication for devices, such as Alexa, that would be triggered by movement and the voice combined. [6]

There are issues with the lack of verification necessary to unlock access to the virtual assistants and to give them commands. [6] Currently, there is only one layer of authentication, the voice; there is no layer that requires the owner of the virtual assistant to be present. Thus, with only one barrier protecting all of the information virtual assistants have access to, concerns are raised regarding the security of the information exchanged. Such privacy concerns have influenced the technology sector to think of ways to add more verification, such as a VS Button, which would account for motion in addition to the voice before activating the virtual assistant. [6]
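
To make the single-layer versus multilayer distinction concrete, the sketch below shows, in Python, how a second, presence-based factor could gate a voice command. This is only my own illustration under stated assumptions, not code from the cited paper or from any real assistant; the names (Command, MotionSensor, authorize) and the 0.8 voice threshold are hypothetical.

```python
# Hypothetical sketch: gating a voice command behind a second factor.
# Nothing here comes from a real assistant API; all names are illustrative.

from dataclasses import dataclass


@dataclass
class Command:
    transcript: str
    speaker_score: float  # confidence (0 to 1) that the voice matches an enrolled user


class MotionSensor:
    """Stand-in for a presence sensor (e.g., one based on WiFi motion sensing)."""

    def human_present(self) -> bool:
        # A real implementation would query hardware; this stub returns a fixed value.
        return True


def authorize(command: Command, sensor: MotionSensor,
              voice_threshold: float = 0.8,
              require_presence: bool = True) -> bool:
    """Single layer: voice only. Multilayer: voice AND a detected person."""
    voice_ok = command.speaker_score >= voice_threshold
    if not require_presence:                      # single-layer authentication
        return voice_ok
    return voice_ok and sensor.human_present()    # multilayer authentication


cmd = Command(transcript="unlock the front door", speaker_score=0.9)
print(authorize(cmd, MotionSensor(), require_presence=False))  # voice alone suffices
print(authorize(cmd, MotionSensor(), require_presence=True))   # voice plus presence required
```

In this sketch, something like the proposed VS Button corresponds to running the check with require_presence set to True.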

Voice Authentication with Siri

The "Hey Siri" function allows the iPhone to listen through ambient sound until this phrase is spotted. Once this phrase is spotted, Siri is triggered to respond. [7] In order to not always be listened to, an iPhone user can turn off the “Hey Siri” function. This way, the device will not always be listening for those two words and other information will not be overheard in the process. [8] This voice authentication serves as a singular layer, since only the voice is used to authenticate the user.

Examples of Virtual Assistants

Amazon Alexa

This virtual assistant is linked to the "Echo" speaker created by Amazon and is primarily a voice-controlled device that can play music, give information to the user, and perform other functions. [9] Since the device is controlled by the voice, there are no buttons involved in its usage. The device does not have a measure to determine whether or not the voice heard is actually that of the consumer. [6] The Virtual Security Button (VS Button) has been proposed as a potential method to add more security to this virtual assistant. [6]

The benefits of adding a VS button to Alexa

The VS button uses technology from WiFi networks to sense human kinematic movements. [10] Home burglary poses a danger when smart lock technology can be activated by voice alone, even if no authorized person is present. [11] Thus, the VS button, by providing a double-check before allowing Alexa to be utilized, would make such dangerous scenarios less likely to occur. [12] The introduction of the Virtual Security button would add another level of authentication, hence adding privacy to the device. [13]

Apple’s Siri

Siri is Apple Corporation's virtual assistant and is utilized on the iPhone. Siri gathers the information that users input and has the ability to utilize this data. [3] The ecosystem of the technological interface is vital in determining the amount of privacy, as the ecosystem is where the information lives. Location information can also be compromised if one uses the GPS feature of the iPhone. [14] Any information, such as one's location, that is given away in an exchange with a virtual assistant is stored in these ecosystems. [15]

Hey Siri

“Hey Siri” allows Siri to be voice-activated. The device continues to collect ambient sounds until it detects the words "Hey Siri." [16] This feature can be helpful for those who are visually impaired, as they can access their phone's applications using only their voice. [17]

Siri's Level of Authentication

Apple’s Siri also has only one level of authentication. If a passcode is set, Siri will require the passcode to be entered in order to utilize various features. However, consumers value convenience, so not all devices have passcodes enabled. [6]

Cortana

Cortana, Microsoft's virtual assistant, is another voice-activated virtual assistant that requires only the voice; hence, it also relies solely on a single form of authentication. [18] The device does not utilize the VS button previously described, so no second form of authentication is present. The commands the device handles mostly have to do with reporting the weather, calling one of the user's contacts, or giving directions. All of these commands require an insight into the user's life because in the process of answering these queries, the device looks through data which is a privacy risk.

Google Assistant

Google Assistant, which was originally dubbed Google Now, is the most human-like virtual assistant. [19] The similarities between humans and this virtual assistant stem from the natural language it uses as well as its knowledge of the tasks users would like to complete before the user even begins them. The device practically predicts what the human will want. This prior knowledge makes the interaction much more natural. Some of these interactions specifically are called promotional commands. [20]

Automated Virtual Assistants in Ride Sharing

Ride-sharing companies like Uber and Lyft utilize artificial intelligence to scale their scope of business. In order to create adaptable prices that change with the supply and demand of rides, such companies use technological algorithms to determine "surge" or "prime time" pricing. [21] Moreover, this artificial intelligence feature helps to subside the concern of privacy that was previously taking place in companies like Uber and Lyft when employees were potentially interacting with each other and giving away confidential information. However, even the artificial intelligence systems utilized can "interact" with one another, so these privacy concerns for the companies are still relevant. [22]

Accessibility of Terms and Agreements

The terms of agreements that one has to approve when first getting a device are what give corporations like Apple Corporation access to information. These agreements outline the functions of devices, what information is private, and any other information that the company thinks is necessary to disclose. [23] Even for customers who do read this information, the terms are often worded in a vague and unclear manner. The text is set in an objectively small font and is often considered too wordy or lengthy in scope for the average user. [24]

Privacy by design

Privacy by design makes the interface more secure for the user. Privacy by design essentially means that in a product’s blueprint, aspects of privacy are incorporated into how the object or program is created. [25] Even technologies whose uses have little to do with location have the ability to track one's location. For example, WiFi networks are a danger for those trying to keep their locations private. Various organizations are working toward making privacy by design more regulated so that more companies adopt it. [25]

If a product does not have privacy by design, then companies need to add modes of privacy to their products. The goal is for organizations to be formed to ensure that privacy by design follows a standard; this standard would make privacy by design more reliable and trustworthy than privacy by choice. [25] The standard would have to be high enough not to leave loopholes through which information could be infringed upon, and such rules may apply to virtual assistants.

Various patents have addressed the requirement for technology, such as artificial intelligence, to include various modes of privacy by nature. These proposals have included Privacy by Design, which occurs when aspects of privacy are incorporated into the blueprint of a device. [26] This way, corporations do not have to retrofit privacy into their designs in the future; designs can be written with privacy in mind from the start. This would allow for a more fail-safe method of making sure that privacy algorithms would not leave out even edge cases. [25]
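
As a loose, hypothetical illustration of the difference between building privacy into a design and adding it afterward, the Python sketch below (my own example, not drawn from the cited patents or sources) treats privacy-protective behavior as the default in the data model itself: nothing is shared or retained unless the user explicitly opts in.

```python
# Toy illustration of privacy by design: private behavior is the default in the blueprint.
# All names and settings here are hypothetical.

from dataclasses import dataclass
from typing import Optional


@dataclass
class AssistantSettings:
    """Defaults are chosen so that doing nothing is the private choice."""
    store_voice_recordings: bool = False   # opt-in rather than opt-out
    share_location: bool = False
    retention_days: int = 0                # nothing is kept unless the user raises this


@dataclass
class Request:
    transcript: str
    location: Optional[str] = None


def process(request: Request, settings: AssistantSettings) -> dict:
    """Only data the settings explicitly allow is included in what leaves the device."""
    record = {"intent": request.transcript}
    if settings.share_location and request.location is not None:
        record["location"] = request.location
    return record


print(process(Request("what's the weather", location="home"), AssistantSettings()))
# {'intent': "what's the weather"} -- the location is withheld under the default settings
```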

Artificial Intelligence

Artificial intelligence as a whole attempts to emulate human actions and provide the menial services that humans would otherwise perform but should not have to be bothered with. [27] In the process of automating these actions, various technological interfaces are formed.

The problem that has to be solved stems from the fact that, in order to process information and perform their functions, virtual assistants curate information. [27] What they do with this information and how the information can be compromised is vital to note for both the field of virtual assistants and artificial intelligence more broadly.

Controversy:

There have been controversies surrounding the opinions that virtual assistants can have. As the technology has evolved, virtual assistants have the potential to take controversial positions on issues, which can cause an uproar. These views can be political, which can have a large impact on society since virtual assistants are used so widely. [28]

Crowdsourcing is also controversial; although it allows for innovation from the users, it can act as a way for companies to take credit where, in reality, the customers have created a new innovation. [29]

How human-like is too human-like for virtual assistants to become?

The Wizard of Oz approach to researching human-robot interaction has been in existence. Specifically, this approach aims to have a human leader of a study fill in for a robot while the user completes a task for research purposes. [30] In addition to humans evaluating artificial intelligence and robots, the Wizard of Oz approach is being introduced. When technology becomes close to being human-like, the Wizard of Oz approach holds that this technology has the ability to evaluate and augment other artificial intelligence technology. Moreover, the method also suggests that technology does not necessarily have to be human-like in order to be utilized. [31] Thus, as long as they have useful features, virtual assistants do not have to focus all of their innovation on becoming more human-like in order to be utilized.

See Also

Apple Corporation

Amazon

Artificial Intelligence

Software

Privacy


References

  1. ^ "College admissions in the United States", Wikipedia, 2018-09-21, retrieved 2018-09-25
  2. ^ https://patents.google.com/patent/US9729592B2/en
  3. ^ a b Sadun, Erica; Sande, Steve (2012). Talking to Siri. Que Publishing. ISBN  9780789749734.
  4. ^ Turkle, Sherry. "A Nascent Robotics Culture: New Complicities for Companionship" (PDF). MIT.
  5. ^ Lei, Xinyu; Tu, Guan-Hua; Liu, Alex X.; Ali, Kamran; Li, Chi-Yu; Xie, Tian (2017). "The Insecurity of Home Digital Voice Assistants -- Amazon Alexa as a Case Study". arXiv: 1712.03327.
  6. ^ a b c d e f Lei, Xinyu; Tu, Guan-Hua; Liu, Alex X.; Li, Chi-Yu; Xie, Tian (2017-12-08). "The Insecurity of Home Digital Voice Assistants - Amazon Alexa as a Case Study". arXiv: 1712.03327.
  7. ^ https://dl.acm.org/citation.cfm?id=3134052
  8. ^ https://dl.acm.org/citation.cfm?id=3134052
  9. ^ López, Gustavo; Quesada, Luis; Guerrero, Luis A. (2018). "Alexa vs. Siri vs. Cortana vs. Google Assistant: A Comparison of Speech-Based Natural User Interfaces". Advances in Human Factors and Systems Interaction. Advances in Intelligent Systems and Computing. Vol. 592. pp. 241–250. doi: 10.1007/978-3-319-60366-7_23. ISBN  978-3-319-60365-0.
  10. ^ Lei, Xinyu; Tu, Guan-Hua; Liu, Alex X.; Ali, Kamran; Li, Chi-Yu; Xie, Tian (2017). "The Insecurity of Home Digital Voice Assistants -- Amazon Alexa as a Case Study". arXiv: 1712.03327.
  11. ^ Lei, Xinyu; Tu, Guan-Hua; Liu, Alex X.; Ali, Kamran; Li, Chi-Yu; Xie, Tian (2017). "The Insecurity of Home Digital Voice Assistants -- Amazon Alexa as a Case Study". arXiv: 1712.03327.
  12. ^ Lei, Xinyu; Tu, Guan-Hua; Liu, Alex X.; Ali, Kamran; Li, Chi-Yu; Xie, Tian (2017). "The Insecurity of Home Digital Voice Assistants -- Amazon Alexa as a Case Study". arXiv: 1712.03327.
  13. ^ Lei, Xinyu; Tu, Guan-Hua; Liu, Alex X.; Ali, Kamran; Li, Chi-Yu; Xie, Tian (2017). "The Insecurity of Home Digital Voice Assistants -- Amazon Alexa as a Case Study". arXiv: 1712.03327.
  14. ^ Andrienko, Gennady. n.d. “Report from Dagstuhl: the Liberation of Mobile Location Data and Its Implications for Privacy Research.” Contents: Using the Digital Library.
  15. ^ Andrienko, Gennady; Gkoulalas-Divanis, Aris; Gruteser, Marco; Kopp, Christine; Liebig, Thomas; Rechert, Klaus (2013-07-19). "Report from Dagstuhl: the liberation of mobile location data and its implications for privacy research". ACM SIGMOBILE Mobile Computing and Communications Review. 17 (2): 7–18. doi: 10.1145/2505395.2505398. ISSN  1559-1662. S2CID  1357034.
  16. ^ https://dl.acm.org/citation.cfm?id=3134052
  17. ^ Ye, Hanlu; Malu, Meethu; Oh, Uran; Findlater, Leah; Ye, Hanlu; Malu, Meethu; Oh, Uran; Findlater, Leah (2014-04-26). Current and future mobile and wearable device use by people with visual impairments, Current and future mobile and wearable device use by people with visual impairments. ACM, ACM. pp. 3123, 3123–3132, 3132. doi: 10.1145/2556288.2557085. ISBN  9781450324731. S2CID  2787361.
  18. ^ López, Gustavo; Quesada, Luis; Guerrero, Luis A. (2018). "Alexa vs. Siri vs. Cortana vs. Google Assistant: A Comparison of Speech-Based Natural User Interfaces". Advances in Human Factors and Systems Interaction. Advances in Intelligent Systems and Computing. Vol. 592. pp. 241–250. doi: 10.1007/978-3-319-60366-7_23. ISBN  978-3-319-60365-0.
  19. ^ López, Gustavo; Quesada, Luis; Guerrero, Luis A. (2018). "Alexa vs. Siri vs. Cortana vs. Google Assistant: A Comparison of Speech-Based Natural User Interfaces". Advances in Human Factors and Systems Interaction. Advances in Intelligent Systems and Computing. Vol. 592. pp. 241–250. doi: 10.1007/978-3-319-60366-7_23. ISBN  978-3-319-60365-0.
  20. ^ López, Gustavo; Quesada, Luis; Guerrero, Luis A. (2018). "Alexa vs. Siri vs. Cortana vs. Google Assistant: A Comparison of Speech-Based Natural User Interfaces". Advances in Human Factors and Systems Interaction. Advances in Intelligent Systems and Computing. Vol. 592. pp. 241–250. doi: 10.1007/978-3-319-60366-7_23. ISBN  978-3-319-60365-0.
  21. ^ https://www.competitionpolicyinternational.com/wp-content/uploads/2017/05/CPI-Ballard-Naik.pdf
  22. ^ https://www.competitionpolicyinternational.com/wp-content/uploads/2017/05/CPI-Ballard-Naik.pdf
  23. ^ https://heinonline.org/HOL/Page?handle=hein.journals/jmjcila27&div=23&g_sent=1&casa_token=&collection=journals
  24. ^ https://heinonline.org/HOL/Page?handle=hein.journals/jmjcila27&div=23&g_sent=1&casa_token=&collection=journals
  25. ^ a b c d Cavoukian, Ann; Bansal, Nilesh; Koudas, Nick. "Building Privacy into Mobile Location Analytics (MLA) Through Privacy by Design" (PDF). Privacy by Design. FTC.
  26. ^ Personal virtual assistant (patent), retrieved 2018-10-30.
  27. ^ a b McCorduck, Pamela (2004). Machines who think: a personal inquiry into the history and prospects of artificial intelligence (25th anniversary update ed.). Natick, Mass.: A.K. Peters. ISBN 1568812051. OCLC 52197627.
  28. ^ https://static1.squarespace.com/static/53853b6ae4b0069295681283/t/5abd99136d2a739c3f5f0ca5/1522374932029/politics_virtual_assistants.pdf
  29. ^ https://static1.squarespace.com/static/53853b6ae4b0069295681283/t/5abd99136d2a739c3f5f0ca5/1522374932029/politics_virtual_assistants.pdf
  30. ^ http://delivery.acm.org/10.1145/1520000/1514115/p101-steinfeld.pdf?ip=136.152.143.155&id=1514115&acc=ACTIVE%20SERVICE&key=CA367851C7E3CE77%2E3158474DDFAA3F10%2E4D4702B0C3E38B35%2E4D4702B0C3E38B35&__acm__=1541007256_445f86dd397d548e295e0d927a776f19
  31. ^ http://delivery.acm.org/10.1145/1520000/1514115/p101-steinfeld.pdf?ip=136.152.143.155&id=1514115&acc=ACTIVE%20SERVICE&key=CA367851C7E3CE77%2E3158474DDFAA3F10%2E4D4702B0C3E38B35%2E4D4702B0C3E38B35&__acm__=1541007256_445f86dd397d548e295e0d927a776f19

Peer reviews and sourcing Information

Tommytheprius Week 9 Peer Review

Some technical things that stood out:

  • You have a lot of terms that look like hyperlinks but are in red. This means that you have tried to insert a hyperlink to a page that doesn't exist, so I'd suggest just taking away the attempted link. You can do this by just clicking on the word(s) and clicking on the red circle with the line through it.
  • Generally, you should use a hyperlink for a key term only the first time you use it. I've noticed some inconsistencies where you either use a hyperlink but not on the term's first use or you repeatedly hyperlink the same terms, such as artificial intelligence.
  • The See Also section seemed like it looked a little weird to me, so I looked at what other pages did and they usually put bullets. I'd recommend using bullets to make it look like more of a list.
  • I think you might want to be more consistent with your citations. You alternate between adding them before the period, right after the period, with a space after the period, and sometimes there is no space between the end of the citation and the start of the next sentence. It just feels like it would flow better if they were all right after the period, which I think is the standard way to add citations.
  • The colons after the "Controversy" and "Privacy of Virtual Assistants" titles seem unnecessary.
  • Is "Virtual Assistants" a proper noun? If not, you could take away the capitalization of the first letters.
  • I believe that the wikipedia trainings say that headings and subheadings should have the first letter of the first word capitalized and the rest of the words in lowercase, unless there are proper nouns, so you might want to adjust some of the headings accordingly. (Ex. Artificial Intelligence, Siri's Level of Authentication, Voice Authentication with Siri)
  • In the reference section, a lot of the links look like repeats. I think you could use the "reuse" option more when adding citations so that citations from the same source accumulate under one number. Also, I'm not completely sure about this, but you may want to put in the ASA citations and not just links for each source.

Comments on content:

  • Lead section:
    • The sentence "Well known virtual assistants include Alexa and Siri, and these assistants are from their parent corporations named Amazon Incorporated and Apple Incorporated respectively." seems a little bulky. A rephrase could be "Well known virtual assistants include Alexa, made by Amazon, and Siri, produced by Apple."
    • Consider adding a citation after this sentence: In order to allow users both convenience and assistance, privacy by design and the Virtual Security Button propose methods in which both are possible.
    • Overall, your lead section is very concise and does a great job of providing an overview for the article!
  • One layer versus multilayer identification:
    • You might want to add a concise definition of "Virtual Security button" because you often reference what it would provide, but I'm not clear on exactly what it is. Also, at one point you use "VS button" without first putting that in parentheses. You could just put VS button in parentheses after the first use and use the abbreviation from that point forward.
    • I'm sort of confused about the concept of multilayer identification. I assume the meaning is that beyond authenticating a single voice (single layer identification), there is some sort of other layer, but I think you could improve this section by adding an example of what another layer could be.
    • When you say that the voice is the only layer of identification, is that referring to any voice or a specific owner's voice? I know that Hey Siri is calibrated to only respond to the voice of the iPhone's actual owner, but I feel like that point isn't completely clear.
  • Examples of virtual assistants:
    • The sentences "The Virtual Security Button (VS Button) has been proposed as a potential method to add more security to this virtual assistant. [1]" and "The introduction of the Virtual Security button would add another level of authentication, hence adding privacy to the device. [2]" seem very similar and maybe redundant, so you could consider cutting one of them.
    • After this sentence: "Siri is Apple Corporation's virtual assistant and is utilized on the iPhone. Siri gathers the information that users input and has the ability to utilize this data. [3]" I think you should add examples of how Siri utilizes this data or to what ends.
    • Some of the information in the "Siri's Level of Authentication" section is very similar to some of the information in the "Voice Authentication with Siri" section. It seems like you might want to consolidate these into just the "Siri's level of Authentication" section because it feels like it belongs with the other examples, not with the single vs multilayer authentication discussion.
    • When you claim that there is a privacy risk in the sentence "All of these commands require an insight into the user's life because in the process of answering these queries, the device looks through data which is a privacy risk." you could add a citation to ensure the reader knows that this is not your subjective thinking.
    • When you say "The device practically predicts what the human will want." it seems pretty subjective, so I'd add a citation.
    • The sentence "Moreover, this artificial intelligence feature helps to subside the concern of privacy that was previously taking place in companies like Uber and Lyft when employees were potentially interacting with each other and giving away confidential information." seems convoluted. You could rephrase by saying something like "Moreover, this artificial intelligence feature helps to allay privacy concerns regarding the potential exchange of confidential user information between Uber and Lyft employees."
  • Accessibility of terms and agreements:
    • The title uses "terms and agreements" but the first sentence says "terms of agreements" - which one is right?
    • You may want to include something about the legal implications of a user agreeing to terms if you have anything related to that in your articles. If the user accepts these terms, are they signing away legal rights?
  • Privacy by design:
    • "If a product does not have privacy by design, then companies need to add modes of privacy to their products." This sentence could appear subjective. Consider adding a citation or rephrasing to not include the word "need". A rephrase could be something like "If a product does not have privacy by design, the producer might consider adding modes of privacy to the product."
  • Artificial intelligence
    • You could add examples of the interfaces mentioned in this sentence: "In the process of automating these actions, various technological interfaces are formed."
  • Controversy:
    • "The Wizard of Oz approach to researching human-robot interaction has been in existence." This sentence sounds a little abrupt." You could rephrase to say something like "One way to research human-robot interaction is called the Wizard of Oz approach." (also I changed this hyperlink to direct to the experiment and not a list of different uses for the term Wizard of Oz)

Overall, I think you've got a great article here! The structure was logical, it had a good tone, and it was a very interesting read. Good job :)

First Draft Peer Review from Midwestmich9

    • I thought your lead section gave a good overview of the scope of your topic and the ideas you wanted to include. I really appreciated how all the sections in your article were mentioned in the lead, because this allows the reader to anticipate what the article is about. There were some sentences that I think you have to make sure you have good sources for, because they seem a little biased. For example the sentence, “Rather than taking these potential infringements to heart, consumers value the convenience that virtual assistants provide” sounds biased because it seems like you’re assuming the attitude of the consumer.
    • I thought you did a really great job of analyzing the technical aspects of the technologies and identifying how they could be improved to prevent privacy breaches. To improve on the Amazon Alexa section, I suggest adding a sentence or two that describes what Alexa is and giving the reader a little more background before talking about the issues with it. The same could be done for the Siri Section as well. This helps readers who don't have exposure to this technology to fully understand your article. Overall, I thought these sections were well written and I thought it was great how you explained how they related to one another.
    • All the information in the article was well balanced. More information could be added to the convenience versus safety section to make it more balanced to the other sections, but overall the structure is balanced!

Funfettiqueen Peer Review

I think that your lead section is really clear. You provide a great definition of virtual assistants which sets the scene for the rest of the article and gives a clear understanding to readers. In the lead section, though, it seems as though you provide clear concerns regarding privacy. Rather than including this in the lead section, I think it could be helpful to have a section titled "Privacy concerns with virtual assistants". It seems that the points you bring up are rather skeptical about privacy and virtual assistants (which is probably the dominant viewpoint, too), so I think it is important to distinguish that these are concerns. Additionally, I think it could be really interesting to provide a section about the legality of privacy and these virtual assistants, since it seems like the majority of the population finds them pretty concerning. For the different virtual assistants that you detail, I think it would be helpful to provide information about the different technologies and their functionalities. Also, I think you can hyperlink them to existing Wiki pages, but I did not check if those exist. In the section of One layer vs. multilayer, I was a tad confused why you kept using the future tense of "would". Is it not implemented? I was also a little confused when you went into PbD, so it could be cool to have a section called "solutions to privacy concerns" or something like that!

References

Abdelhamid, Mohamed, Srikanth Venkatesan, Joana Gaia, and Raj Sharman. 2018. “Do Privacy Concerns Affect Information Seeking via Smartphones?” IGI Global. In this article, it is eminent that technology has made many tasks that used to be menial much more automated. Siri and Alexa, as well as any other virtual assistants, are user-friendly options that make menial tasks such as using search engines more automated. However, such usages come with a price privacy-wise. The ease of seeking information makes even the most uninterested users use search engines more to search for facts. Thus, a more diverse group of people are exposed to technology through algorithms such as virtual assistants. These people, however, do not consider the increased privacy risk that they are at from using the services with this technology rather than with the original method. As briefly stated in some of my other articles, having virtual assistants as an intermediary creates another barrier for private information to be able to get through. This research will have to do with my article since I am writing on precisely these intermediaries. This article will add specific information about the pros and cons of these intermediaries and whether or not they outweigh using the traditional methods enough to risk the privacy. Obviously, many users will not even consider the privacy aspect, but for those that will, this is a good article to reference. Hence, these interested readers, as well as technological critics, would be the intended audience for this piece. The language can be somewhat tedious to understand, but any interested reader would be able to fully understand the piece. This source is reliable because it is peer reviewed and has very reliable citations as well; there is not a biased present as both pros and cons are produced and addressed throughout the piece. The author does not seem to weigh one side more heavily than the other either, which is vital for the article being objectively interpreted. Andrienko, Gennady. n.d. “Report from Dagstuhl: the Liberation of Mobile Location Data and Its Implications for Privacy Research.” Contents: Using the Digital Library. Through this article, a reader will better understand the potential privacy infringements that are possible through the utilization of smartphones as a whole. Although this article talks about data for the phone as a whole more-so than for the virtual assistants in particular, virtual assistants, generally, end up being another medium toward using the same features present on the phone itself. The article discusses the methods as to how the data that smartphones collect can be misleading and lead a company to the wrong conclusion about users’ lives. Data is practically an “ecosystem” that the analysts have to sort through, and sometimes with little to no contextualization. In addition to the concept of GPS privacy infringement, there is also an infringement potentially on people’s phone calls. The article even directly addresses Siri, Apple’s virtual assistant, by saying that when someone tells Siri to text a message to someone else, Siri now has that information. The article states that there should be a mode of education for the standard user in which they can understand the data ecosystem and how their data is being used. The virtual assistant more-so serves as a medium through which these are features accessible. Hence, this article’s discussion is very prominent to my research. 
The discussion of the GPS specifically is a vital one to note; when someone says “Hey Siri, take me to the nearest Starbucks,” for example, Siri knows where said-person is. This raises important questions, such as is the default setting of the phone for Siri to know where the user is at all times? Also, what exactly can Apple access from this information, and how does it invade the privacy of the user. In my opinion, this article should be accessible to the public; the issues discussed should be common knowledge for all smartphone users. Lastly, the source is very reliable since it was peer-reviewed by many and was affiliated with a reliable company, IBM. Backer, Larry Catá. 2013. “Transnational Corporations' Outward Expression of Inward Self-Constitution: The Enforcement of Human Rights by Apple, Inc.” The Mutual Dependency of Force and Law in American Foreign Policy on JSTOR. Globalization and societal norms as a whole have been affected by the “constitutions” of mega corporations which has thus shaped societal constitutions. These constitutions serve as protocol that form how legitimate services are run. Constitutions have been less effective over time. The relationship between the constitutions of society and corporations has been shaped by technology. Prior to technological advances, constitutions were unable to interact with one another and intermingle their identity. However, now they overlap which causes citizens to both become multicultural but less connected to their own culture at the same time. Technology mixes worlds like never before, and the world is still adjusting to the issues that this intertwinement can cause. Equilibrium is a vital part of the balance between culture and technology and how this can be thrown off. This source is very useful for my writing as the heart of my article lies with the main virtual assistants such as Siri and Alexa, and this article uncovers the fact that cultural implications of this technology emerge unless if a societal constitution seeks equilibrium. Because culture impacts the public so much, some information about the topic of these main virtual assistants is vital to include into my Wikipedia article. This source will be a good reference point, and will be a great place to jumpstart my investigation into the privacy with the big corporations specifically. The title including antonyms such as outward and inward shows the truly complex nature that technology has with human rights and the field of privacy. Larry Catá Backer, a legal expert, has the ability to paint a clear picture of the human rights that people compromise to technology and how these rights are given away with the use of technology without people knowing it. The credibility of the source is eminent, with Backer as the author and with JStor as the database. The article was peer reviewed, and contains information that I would like to use in my article; thus, this article will be an instrumental piece of research. The audience of this article would likely be law students, perhaps, since the author of the piece is a legal expert. Moreover, the article is written in terms that makes it accessible to the common reader. Brin, David. 1999. “The Transparent Society.” Harvard Journal of Law and Technology. This piece outlines the foreshadowing of technology making everyone’s lives transparent in the near future either gradually or with a major leak. 
The main question of the piece is that as technology overcomes society as a whole, will those who do not want to risk their information being infringed upon have to not engage in technology? The piece explores the dominance of various corporations and the government in both the management of information and the introduction of devices that control this management. There are few companies that control the bulk of this information, which makes the entire process less fail safe. The piece looks at specific technology and in the process argues that consumers should not have to choose between liberty and being technologically updated. “Reciprocal technology” specifically is proposed, which reminds me of the privacy by design concept present in my other research. This information is useful for my research as these virtual assistants are becoming more and more dominant on the daily, and two main companies control the fate of the actions of the bulk of virtual assistants. This article does seem to have bias, however, as in the introduction specifically, certain words such as “argue” are used which implies that some of the information may be opinionated. However, this source is from the Harvard law library so it is quite a reliable source. When going through information from the source, I will make sure to solely pick out the bipartisan viewpoints and ensure that no opinionated facts make it into my article. Even though this article is not perfect because of the slight amount of bias it contains, I still think the information is useful enough and provides an alternative to privacy by design, so I would like to use this research in the manifestation of my Wikipedia article. Cavoukian, Ann. 2014. “ Building Privacy into Mobile Location Analytics (MLA) Through Privacy by Design.” Aisle Labs. This article discusses the dangers that a lack of privacy in location tracking technology can cause. If a company can gather enough information about a person, they can perhaps figure out other facets of a person's life or livelihood. The goal is to gain a "win win relationship" between the consumer and producer by creating "privacy by design." This terminology essentially means that in a product’s blueprint, aspects of privacy are incorporated into how the object or program is created. Even technology uses that have little to do with location have the ability to track one's location. For example, wifi networks are a danger for those trying to keep their locations private. Various organizations are working toward making privacy by design more regulated so that more companies do it. This article is vital for my article since I need to find the relationship between privacy by design and virtual assistants. I will compare sources between this article, ones specifically on virtual assistants, and ones for technology privacy in general. This is an interesting piece because it looks at privacy in technology from the producer's point of view. Rather than looking at the output that a product has with regards to privacy, this article looks at the making of a product's infrastructure and how privacy can be incorporated. If a product does not have privacy by design, then companies need to add modes of privacy to their products. The goal is for organizations to be formed, as described in the article, to ensure that privacy by design is done by standard; this standard would make privacy by design, hopefully, more reliable and trustworthy than privacy by choice. 
The standard would have to be high enough to not give loopholes of information to infringe upon, and such rules would have to apply to virtual assistants in order to help out my article's subject. This article is written by Cavoukian, a PhD and has also been peer reviewed and edited from two other scholars. Thus, this article is very trustworthy and will be quite useful in my argument. The audience of this piece would likely be those interested in the privacy of common products they use or people who are perhaps planning on creating a product that they wish to be trustworthy themselves. Cooper, Robert. 2004. “Personal Virtual Assistant.” United States Patent 27–65. This source is a patent about personal virtual assistants, which describes how the behavior of the technology is affected how users input information. A remote computer is the technology that ultimately makes the device conform to its algorithms. Various words and techniques of the voice to convey emotion also affect the response from the virtual assistant to whatever information is thrown at it. Many professionals have actual human assistants to help them complete menial labor that is not worth their time, such as cold calling. Virtual assistants provide this service to the wider market. The system of “varying voice menus” and “voice response systems” are patented in this document. The ability to adapt behavior is one that is innovative for technology and is unable to be duplicated in the precise method that this patent lays out. The “Voice Activation,” or VA system, provides a method for which how the device is triggered. Connecting this to my other sources, there is a source for Siri that uses the “Hey Siri” feature. This feature, if activated, essentially permits the device to constantly be listening for those words. In the process, as discussed in other articles, information that does not concern those two words can be infringed upon. Ultimately, this article proves that the technology concerning virtual assistants is proprietary and should not be duplicated. This fits into my article’s scope because the laws concerning patents and privacy intertwined ultimately influence how privacy by design can be upheld. Without legal boundaries, some companies may not bother to have privacy by design. The patent responds to laws, as well as customer demand, and shapes how the device’s algorithms manage aspects of their operation, such as privacy; the way that these algorithms differ is what makes the patent feasible and thus proprietary. Since this is a legal document, there is no bias associated, making it a reliable source. The patent is on Google, which is the holder of a virtual assistant itself, so the source is a reliable one. The intended audience is likely a technical audience, since the patent uses legal terminology. Perhaps a lawyer for the tech companies would be very interested to figure out what can be duplicated and what is proprietary. This article will be instrumental for my research since I should include the case law surrounding the article so that it covers all of the topics surrounding the privacy of virtual assistants possible. Damopoulos, Dimitrios. 2013. “User Privacy and Modern Mobile Services: Are They on the Same Path?” Contents: Using the Digital Library. Although virtual assistants are primarily for the primary user of the device being utilized, the receiving user is an important facet of the exchange as well. 
Many of the functions of virtual assistants have to do with calling or texting someone else, as in another user. Thus, not only the user’s privacy is at stake but also this receiving user as well. Many of the articles overlooked this fact, which is why delving deeply into this article seemed so attractive. Some malware testing is used in a study in this article to ultimately test out the privacy from both sides. The study results are shown in this article, along with the technological features that contribute to how much privacy there is or lack thereof. In all of my other articles, the privacy was solely discussed from the point of view of the owner of the device using the virtual assistant; this article has a different point of view and analyzes the other side which is useful for an objective Wikipedia article. Thus, I now have information to include about both parties of an exchange with the usage of virtual assistants. This source’s author has many affiliated authors and reliable sources cited throughout, so the article is reliable from that standpoint. Moreover, the audience of this article would have to be technical as various aspects of code and other technology is described that the average user may not be able to fully understand. Computer science jargon is used that many, including myself, do not fully understand without using other resources. However, some of my other articles lack this technological knowledge, so if I utilize some of these terms and then hyperlink them to their respective pages, my article may seem more well-informed than it would have been without my researching this article. Haake, Magnus. 2008. “Visual Stereotypes and Virtual Pedagogical Agents.” JStor. This article discusses the use of virtual devices in traditional mediums such as newspapers. The article outlines the pros and cons of intervening in these traditionally not technological mediums. Although technology allows for new features to be added to these traditional mediums, for mediums such as newspapers, the traditional aspect of using the page is lost. Specifically, virtual pedagogical agents are evaluated and the pros and cons of their introduction to traditional mediums is discussed. The virtual assistants are replacements for functions that used to be manual labor; thus, an element of personability is lost. This gap is what technology tries to fill with the personalization aspects discussed in some of my other articles. Apple tries to make technology personalizable to aspects of one’s life such as culture with the various accents. Also, Apple has made Siri have an amicable front with the ability of Siri to call the user by their preferred names. Hence, this article connects to my research since it discusses the virtualization of facets of labor or items that used to be manual or tangible respectively. The source is reliable since it is a peer-reviewed source found on JStor. The article is not biased, as it lays out all of the facts in an objective manner. The source does not seem to sway in either direction of wanting technology to intervene. The intended audience would really be anyone who is interested, as the language is not particularly scholarly, so the average audience would understand. This article will be relevant for my research since privacy is one of the aspects that determines whether or not this technological intervention is necessary or detrimental. 
I can create a portion of my article or add to one of the existing portions and outline the pros and cons of technological intervention specifically for the fields that virtual assistants address.

Lei, Xinyu. 2017. “The Insecurity of Home Digital Voice Assistants – Amazon Alexa as a Case Study.” This article is a case study on home digital assistants and how they are ultimately insecure. The case study focuses specifically on the single authentication method that allows the devices to take commands from any voice, even when the owner is not around. The article implies that there should be a more secure way of knowing whether or not the owner is present so that voice commands are more secure. Even with potential infringements, virtual assistants are growing in scope. People care about convenience, so they choose to remain ignorant of these issues. The main issues occur when the owner is not present with the virtual assistant, because then the algorithms of the technology are all that separate infringement from occurring. The virtual assistants then often transfer information to their respective corporations, such as Apple and Amazon. A “VS Button” is ultimately the second factor of authentication that would make virtual assistants more secure and harder to infringe upon. This article is reliable since it is from Michigan State University, a valid institution, and is peer edited. Ultimately, the article is made for scholars, as the terminology is quite technical and the average user may not understand it. The potential bias seems to be that the article does not really provide much of a counterclaim that virtual assistants may be more trustworthy than the article lets on, and that the author may be paranoid. This article will be helpful for my research since I am investigating the privacy of virtual assistants, and it provides a multitude of reasons for the lack of privacy with the technology. There are many technical terms that I could use that could potentially become hyperlinks and other articles in the future. This article will be necessary for my research, specifically, since it will provide me with the technical terms that some of my other articles may lack. Because I want my article to be informative, these technical terms should be an important part of the points I make.

Mariani, Joseph. 2014. “Natural Interaction with Knowbots, Robots, and Smartphones.” Springer. This book, in chapters 3 and 4, describes a study where dialog systems are used by controlled study participants. However, as seen in the results, their use is not necessarily indicative of the everyday user. Because a lot of data is involved now, studies are needed with unknowing human subjects. If personal data is involved in a study, users are less likely to be honest about their usage, since their personal information, if they give it, may be compromised. However, if the studies do not include gathering data, or if the data is erased right away, then the subjects are more likely to be honest about their usage. Maintaining the privacy of the subjects is very necessary, and the chapter lays out how to do so. Maintenance of the system is also necessary to maintain privacy by design. As seen in chapter 4, speech-based interaction is vital to look at; for my research on virtual assistants, their usage revolves around speech-based interactions. Multiparty settings, if applicable, make the system even more complex. This information can be useful for my article because it specifically talks about the technology of voice applications in general, which can be seen in more devices than virtual assistants alone. However, since the technology still applies to virtual assistants, it can still be applied to my article. This book provides me with new knowledge on the technology, which will be useful. Moreover, this book is trustworthy, as it was curated on Google Scholar and is peer-reviewed. The audience would likely predominantly be computer science scholars, since they would understand the code models and graphs that are shown in this piece. However, even though some of the information went over my head, I still think this article will be useful for my research.

McCorduck, Pamela. 2009. “Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence.” Taylor & Francis. This annotation is specifically on the thirteenth chapter of the book, in Part V, “Tensions of Choice.” This chapter discusses the implications of artificial intelligence and whether it is moral. Because privacy has to do with the morality of the infringement of information, this chapter is applicable to my research. Because this chapter is in the latter part of the book, it builds on the foundation that artificial intelligence can be created. Thus, this chapter is not questioning the possibility of artificial intelligence but rather its morality, which plays a factor in the privacy of AI as a whole. This chapter questions whether artificial intelligence should be allowed to be taken to as far a level as it will inevitably reach. The chapter goes through several arguments, some on the technical side saying that artificial intelligence is helpful for society and so should be considered an acceptable invention, with others saying that the morality is simply not okay. For the moral arguments, the piece draws comparisons with the idea that various activities are alright to think about but not to actually follow through with; this stance is taken for AI as a whole. The autonomy of AI is both a helpful and a scary thing. This chapter is useful for my research because many of the issues it outlines with AI have to do with how intrusive the technology can be. Thus, the chapter will be useful for the writing of my article. I can include a section about the issues with morality and how these intrude upon users’ privacy. This piece is also very trustworthy, since it draws on many experts, including technology scholars at the top of their field at the Massachusetts Institute of Technology. This book would mostly be useful for a technical audience, or at least readers who are familiar with some technical or AI terms in general. Hence, this useful and legitimate source will help me in my research and has influenced a potential sub-section and hyperlink on the morality of privacy that I would like to add to my article.

Miner, Adam S. 2016. “Smartphones and Questions About Mental Health, Interpersonal Violence, and Physical Health.” JAMA. This article has to do with the manner in which virtual assistants respond to issues about mental health or potential suicide risks. The way that these virtual assistants gather this information is through someone’s command to them. Sometimes, if smartphones are the only resource that people with mental health issues have, it is vital for these smart devices to have responses that either aid the scenario or send the person to the right place to look for help. This article discusses the various responses and the need for continuity and completeness in the answers that virtual assistants provide. The article’s process for looking at this continuity and completeness was a trial in which the questions were tested. Each virtual assistant was then ranked depending on its issues, based on the results curated from this trial. This information would be an interesting addition to my article for a multitude of reasons. For one, there has to be privacy associated with this issue. If students tell a teacher in school about a mental health issue and it is serious enough, teachers are mandated reporters. How is the privacy with Siri similar or different? Another important question is whether or not Apple gets such data and what it can do with it. The audience for this article could be tech companies, perhaps, to see how they compare to other virtual assistants, but also people in general, so that they can support the devices that care more about human health. This article was written by mental health specialists and technologists together, so the information is quite valid. Hence, this article provides an alternative aspect of virtual assistants to look at. When audiences and society think about virtual assistants, the way that they respond to mental health risks is not normally of utmost notoriety; however, as seen in this article, this is a vital thing to note.

Najaflou, Yashar. 2013. “Safety Challenges and Solutions in Mobile Social Networks.” arXiv. All apps on the technology that virtual assistants are used on, even Facebook, have the potential to be infringed upon for the purpose of compromising the user’s data. For example, if one were to say to Siri “post that picture with my mom on Facebook,” which I tried while reading this article, Siri will respond by saying that she has to access your Facebook data to do so. The average user may not understand the implications of compromising this data or the fact that there is potential for the data to be used for more than merely posting a photo. As the article discusses, MSNs, or mobile social networks, provide an environment for data sharing. However, there are many challenges, both logistical and privacy-related, which this article discusses. There are trust and security issues with MSNs, which, for the informed user, can raise red flags potentially leading them not to use the services. Because my article has to do with the privacy of the various data that can be stored with virtual assistants, this article is very relevant to my research; the virtual assistants specifically are tied to the applications discussed in this article. Virtual assistants make this information even more faulty privacy-wise, however, since information now goes through two mediums rather than just one. This article is fairly trustworthy, as it is worded objectively and is written from a nonpartisan standpoint. The audience would have to be someone familiar with the jargon of MSNs, or else further research would be required. Thus, people with technological backgrounds are more likely to fully understand this article than the average user. However, the average user should know about this information, since the main topic discussed is applicable to the average user; ultimately, users themselves are going to be the ones who decide whether or not to take privacy concerns seriously.

Quast, Holder. 2013. “US9571645B2 - Systems and Methods for Providing a Virtual Assistant.” Google Patents. Since legal terminology is vital in determining how much privacy is necessary versus how much is simply ethically expected of virtual assistant parent companies, looking at the case law is vital. This patent specifically has to do with the act of a virtual assistant calling someone; an example of the command to warrant this would be “Hey Siri, call Denis.” In addition to this command having to do with this article, it also relates to the “Hey Siri” potential privacy infringements described in my other articles. This patent specifically talks about the interface behind virtual assistants and precisely what parts contribute to the act of the virtual assistant performing a call. The embodiments for the precise protocol are described; in the process, where the information is stored and how it can be utilized is laid out. Also, when asked to call someone, the virtual assistant looks through all of the contact information in the process; how much can be curated out of that is described in the patent as well. This patent will be useful for my research because the call function is, ultimately, the intended purpose of a phone. Hence, the case law surrounding the ways in which virtual assistants can affect this main usage of a phone is vital to note for my research. The audience of this patent should be anyone who would like to know the technical terms and expectations for virtual assistants. This source is very reliable as well, since it is a patent that has been legally reviewed. Nuance Communications Incorporated endorsed this patent, which could potentially cause a bias. However, the company is not one of the main ones that I will bring up in my research, as Wikipedia is for the common user. Because of this, I would rather bring up companies and virtual assistants that the common user would recognize, such as “Amazon Alexa” and Apple’s “Siri.” Hence, this reliable source will be useful in the technical and legal aspects of my research.

Rao, Ashwini. n.d. “Expecting the Unexpected: Understanding Mismatched Privacy Expectations Online.” USENIX. Because users are ultimately the ones who are affected by privacy infringements, their expectations of privacy are the ones that truly matter. Because the public does not know the implications of privacy infringement, they often do not bother to go through the dense policies that they have to agree to before using a device. There needs to be a way to match what consumers expect with what companies give back to them privacy-wise. When users first think of a website or internet service, if the website has properly done its job, they think of the intended purpose of the service; users often overlook the infringements that come along with it. This article specifically surveyed users to try to estimate what user expectations are when using an online service. Various characteristics of the websites are what often give someone a view of whether or not privacy by design was valued. Mismatches between results and expectations were covered as well. This article is useful for readers who would like to find out what the public values in online infrastructure. It will be useful for my research since it is necessary to know what the public expects and what their standards are for virtual assistants’ privacy; this concept is what will decidedly shape the potential innovations in privacy disclosures that may be necessary in the future. If my article on Wikipedia contains the background of what people expect, such information will be visible to a wider audience, which is a necessary stepping stone for innovations to be made. This article was made in conjunction with professors from Carnegie Mellon University as well as tech professionals, so it is a reliable source that will help sum up the relevance of my article for daily users of technology.

Sadun, Erica. 2013. “Talking to Siri.” Google Books. I specifically read the first chapter, entitled “Getting Started with Siri.” By using Siri, people send information to Apple, such as who their contacts are, their music taste, and their fitness statistics. Apple has the ability to send this information on. The privacy policy is shown, and the book states how users are “encouraged” to read this long policy prior to using Siri. This book is written more from the point of view of trying to show users what the device has to offer; thus, although the book does touch on the fact that Siri gives Apple a lot of personal information that can potentially be used by third-party companies, it nonetheless encourages the user to utilize all of Siri’s functions that ultimately infringe upon privacy. The rest of the book, other than this chapter, is essentially a guide to using Siri; in the process, the implications that are possible are briefly discussed. This chapter is useful for my article since it discusses how information is sent to third parties; now I have a source to cite for this information. I knew that this was the case, but before I had no evidence. Thus, even though this book does have a bias of encouraging the user to use Siri, it is still useful for my research. The authors are very reliable, as they are known authors in this field. Also, the book was curated on Google Scholar, making the source reliable. Ultimately, the book’s audience is an Apple Siri user, as the book does outline the ways in which the device can and should be used. Thus, even though this source is potentially biased, its evidence is still very useful for my article. I plan to use this book to cite information that I needed a source for, and this concept is one of the main points of my article, so this book helps me work toward the goal of creating my article.

Teltzrow, Maximilian. 2004. “Impacts of User Privacy Preferences on Personalized Systems.” Springer. This article discusses how modern technological algorithms have made preferences available that were unavailable prior to technological advances; such advances are especially utilized in the educational sector, where every person learns differently. The article discusses the need for a balance to be created between privacy and personalization, as personalization reveals a significant amount of information about a person. Various data is inputted with personalization, and such information can be more easily compromised than a secure set of data that cannot be touched. Virtual assistants are among the “user adaptive” systems, even though the article does not address them directly, since they emerged after the article was written in 2004. Users have the ability to tell Siri to talk in a certain accent or call them a particular nickname, which are all actions that essentially make the code personalizable. Thus, this article connects to what I would potentially want to put into my Wikipedia article; a good additional section that I failed to think of prior to reading this article would be the privacy of the aspects in particular that are customizable, and whether these facets are more easily infringed upon than the set-in-stone concepts. This article is peer-reviewed and from a reliable publisher, so it is a reliable source; as I outlined above, it is a useful source for my findings. There does not seem to be a potential bias, as the information is fairly objective and statistical. The audience is likely a technology expert who would be able to understand the terminology used. Overall, this article is an interesting source, as it showed that virtual assistants were derived from something prior to their existence; specifically, this article discusses the privacy of personalizable technology, which is precisely what virtual assistants are, as they are objects made to help a user personally.

Turkle, Sherry. 2006. “New Complicities for Companionship.” MIT. This article talks about the psychology of robots in society; although virtual assistants are not physical robots that walk around, they hold human roles, as they have names like Siri and Alexa. Hence, even though this article is not about virtual assistants specifically, the information is very valid for my research, as a lot of it revolves around the psychology, or perception of privacy, that humans have. This article discusses how humans are allowing robots into their everyday routines. Robots do things for the consumer, and do them in an uncannily human fashion. In fact, by completing tasks for us, robots “nurture” the customer. Generally, people like what nurtures them; the article even goes to the length of comparing the comforting feel of a doll with the guardianship that robots provide for the consumer. The concept of robots, as well as virtual assistants, is to mirror human traits as closely as possible. The article also discusses how humans in need, such as young children or senior citizens, would get more attached than a self-functioning adult. Because feeling comfortable with a device implies trust, the nurturing relationship that virtual assistants provide can be an issue for people concerned with privacy. Similarly to a friend telling another friend their secrets, if one trusts robots or virtual assistants, one may overshare information. Hence, this article is good for my research because it gives me good psychological information, which my other articles lack, about why, other than convenience, humans expose themselves to privacy infringement. The juxtaposition of psychological and technological evidence will add credibility to my article for scholars of all backgrounds. This article would be relevant to audiences from both psychological and technical backgrounds; however, it is fairly technical. This article is quite reliable, as it is from a scholar from MIT who has a background in sociology and psychology; since she is from a technical school, she also possessed the technical information and resources necessary to write a paper on robots. Hence, this article is useful for my research and is a reliable source to look at.

West, Darrell. 2018. “The Future of Work: Robots, AI, and Automation.” This annotation is on the second chapter of the book, called “Artificial Intelligence.” This portion describes the automation of tasks that led to the rise of virtual assistants and how this automation greatly affects daily life. Virtual assistants take jobs from humans, but are very helpful and convenient at the same time. Big political predicaments occur due to artificial intelligence as a whole. The social impact of robots and AI is very important to society and the government as a whole. With AI taking jobs from people daily, there is an imposing economic threat that is bound to erupt if technology keeps going in the same direction. Throughout the process of AI, information is gathered. Unlike the humans that used to be in such positions, technology has the ability to perform algorithms that can greatly infringe upon someone's privacy. Without the simple service jobs that AI can now perform, a large sector of the workforce would have no source of income. Society begins to care about these issues more as the technology progresses; with more of a threat to their professions, the general public cares more about emerging technology. Also, this piece discusses how technology fills the needs of people in general, such as transferring money. This book can be very useful because, when technology becomes so advanced, it becomes a threat not only to people’s livelihoods but also to their privacy. Because all of the virtual assistants have their parent companies, such as Apple Incorporated or Amazon, there is a question of what information these companies can harness from their respective virtual assistants. The author, Darrell West, is an esteemed director at the Brookings Institution, so this piece is very trustworthy. The audience of this book could be people who want to learn about modern technology and how it can affect their daily lives. I believe that it can add to my page because it discusses the risks of privacy in depth and the immense amount of infringement that is possible with the sophisticated technology of today. The book also discusses how customers can be ignorant about their privacy being infringed upon. Hence, this source is credible and would make a very welcome addition to my article.

Ye, Hanlu. 2014. “Current and Future Mobile and Wearable Device Use by People with Visual Impairments.” ACM Digital Library. Although this article is specifically about people with visual impairments, it is still important to note issues of privacy, both because everyone matters and because many of the same issues apply to the larger audience. Because people with visual impairments have to have information presented to them in different fashions, this poses a privacy risk. Rather than certain aspects of a device (specifically Alexa or Siri) being visual, such functions may have to be audible. If one is walking around and exposing either contact or payment information audibly, an immense privacy risk is taken. The survey done with this source ultimately took two groups, one with a disability and one without; the survey tracked how the users interacted with their devices and how this affected them in intellectual and social manners. This source is important for my research, since I would like to produce an all-inclusive article; I do not want to leave any people’s usages of virtual assistants out of my Wikipedia article. Hence, I can now either use the information in this article to add to my other sections or create an entire section on technological privacy for people with disabilities. This article will allow me to be even more comprehensive when writing my piece. The audience of this piece would likely be either people with disabilities or scholars who are genuinely interested in looking at all aspects of technology. The terminology is fairly understandable, but perhaps for someone with no technology background, the content may be slightly confusing. This article does not seem to have a potential bias, as all of the information is backed up. When I was reading the article, the piece seemed to primarily lay out facts and did not seem to offer any opinions, making it a reliable source.


Breadyornot's peer review

I like the general structure of your article; the sections are clear and represent two concrete examples that people in the public sphere can recognize (Alexa and Siri). However, the sentences are a bit unclear within most sections and could use some cleaning up and refining. Also, hyperlinks are needed. I noticed that you wrote a note saying bolded terms are going to be hyperlinked, but I'm not sure the bolded text is showing up/if you have started that yet. The article rough draft is a little hard to read since no headings or subheadings are listed, but I'm sure this is an easy fix you can work on closer to the final draft. Most sections have good definitions and examples, something I think is beneficial in describing privacy and digital technology. However, the convenience and safety section is a bit short compared to the others and could use some more research in that capacity; otherwise I think it would be beneficial to absorb this section into another or state something about the lack of sufficient research on the subject to provide a more all-encompassing definition. Also, make sure to add citations; these could be difficult to track down later on when the draft is reworded and revised. Overall, this looks like a great start!

Peer review week 7

Cal.oasis: I think that your lead section does a good job of informing readers about the controversies regarding virtual assistants and what they are. I think you can add hyperlinks, for example with the term artificial intelligence.

Additionally, I think that the structure of your page is good, but I think you can add additional header sections. For example, right now everything is under subsection titles, but I think you can add overarching titles, such as one that includes Alexa and Siri, another on the makeup of the device - one layer versus multilayer authentication and privacy by design - and another that is about convenience versus safety.

I also thought that some of your sections are unequal and have less information than others. For example, when you compare Siri and Alexa and in your controversies discussion, there is a small amount of information compared to your other sections. I also think that you can add transitions and background information at the beginning of your sections. For example, in your Apple’s Siri section, you start with features that Siri has, but you neglect to offer a definition of what Siri is.

Overall, good job! I think that you did a good job explaining the features of virtual assistants, but you need to work on giving more background information to those who do not know what they are.

Angryflyingdolphins peer review (week 8)

Your article is well developed. It was very interesting to read through, especially the sections detailing different virtual assistants. Each section is also fairly well balanced in amount of information which is great!

One of the biggest problems I found throughout your draft is wording and sentence structure. A lot of the wording is clunky and some of your sentences are structured in a disjointed way which messes with the flow of your article. For example, the sentence in your lead section: "There are specifically issues in regards to the lack of verification necessary to unlock access to virtual assistants for them to take commands." Although I can understand what you are trying to say, the sentence is a bit hard to read through. There are also a few sentences in which your wording is redundant. For example, in the beginning of your lead section you write "provide assistance to help users." The words assistance and help are synonyms so there is no need for both in one sentence.

Your lead section does a great job of providing readers an overview of your article but you also elaborate too much. Most of the second and fourth paragraphs in your lead section can be cut out and added to the dedicated sections below. The section on one layer vs. multilayer authentication is a bit confusing. The title of the section suggests that you are about to compare two things. However, you only discuss multilayer authentication and Siri. You never define multilayer authentication or clarify if Siri is one type of authentication or the other. You also have multiple sections dedicated to explaining the different assistants offered by different companies. I feel like you should consolidate all the assistants into one larger section to help with organization. Your section titled "Implications of Privacy Agreements" should be reworded and some more information should be added to round it out. The title of your final section should also be formatted so that it is dedicated to controversy in general and not just one situation. I feel like there is a high chance that you come across more controversies in your research so you may eventually want to put all controversies together in one larger section.

Overall, great work! Other than wording and formatting, there are no other glaring issues. Your tone throughout is unbiased and very encyclopedic. There are hyperlinks and citations throughout, which helps round out your article.


Peer review (week 10)

PanadaFantasy:

Lead section: I think the lead section is very brief and provides audience with very clear definition of virtual assistant privacy and the relationship between virtual assistants and privacy. I really like the lead section! Still, I think in the first sentence " Virtual assistants are software technology that assist users complete various tasks", the "assist" should be "assists", which is only a small grammatical error. Besides, I suppose that the last sentence should be relied on some journals, since it points out the purpose of "privacy by design".

Other sections: The structure of the other sections is well formed; from "One layer versus multilayer authentication" and "Examples of virtual assistants" to terms of use, AI, and controversy, it is very clear to see the structure of this article. In the authentication section, I think that in the sentence "A specific instance in which there are issues with the lack of verification necessary to unlock access to the virtual assistants and to give them commands is when an Amazon Alexa is left in a living quarters unattended." the "a living quarters" should be "living quarters". And in the sentence "Thus, with only one barrier to access all of the information virtual assistants have access to, concerns regarding the security of information exchanged are raised", the modifiers "information virtual" should be "virtual information".

Then, in the example section, the word "knowledgable" should be "knowledgeable". I think this section is wonderful since it gives me concrete examples such as "SIRI" and "Cortana", which makes this part informative and easy to understand. Besides, even though many virtual assistants are mentioned, they are evenly divided, which I think is very good. And the tone of these parts is unbiased and mostly relies on other journals. However, in the last two sections, "AI" and "Controversy", the information is not very detailed or in-depth. I think, if possible, more sources should be added to these parts. Lastly, in the subsection "Wizard of Oz approach", I am quite confused by this part since, after reading it, I still don't know what this approach is or how it differs from other traditional approaches. And since "Wizard of Oz approach" does not have a hyperlink, I suppose that it would be better if you could introduce it first with more detailed information.

Overall, from my perspective, this article is very useful to ordinary readers. It is unbiased in the tone of every sentence and very reliable, as most sentences are based on citations. What I want to suggest is that you add some vivid pictures to your article and add more citations, since there are only 16 bibliography entries.


An outline of my article would likely include general facts about virtual assistants and AI as a whole. Then, I would connect this information to privacy and ultimately the infringement of people's privacy. This information would all be laid out as facts with an objective point of view.

Formal outline:

Lead Section Outline:

Privacy of Virtual Assistants:

Virtual Assistants are software technology that ultimately provide services to customers through various algorithms that follow the commands customers say. Well known virtual assistants include Alexa and Siri, and these assistants are from their parent corporations, Amazon Incorporated and Apple Incorporated respectively. There are privacy issues concerning what information can go to these third party corporations.

There are specifically issues with regard to the lack of verification necessary for the virtual assistants to take commands. As of right now, there is only one layer of authentication, which is the voice; there is not a layer that requires the owner of the virtual assistant to be present. Such privacy concerns have caused technicians to think of ways to have more verification, such as a VS Button.

Rather than taking these potential infringements to heart, consumers value the convenience that virtual assistants provide.

Various patents have controlled the requirement of technology, such as Artificial Intelligence, to require Privacy by Design. This way, corporations do not have to retrofit privacy into their designs later; designs can be written with privacy in mind from the start. This would allow for a more fail-safe method to make sure that privacy algorithms would not leave out even edge cases.

Sections I Want to Include:

-Alexa specifically: measures Amazon takes to ensure that users have privacy; measures Amazon “accidentally” doesn't take (how does this infringe upon information?)

-Siri specifically: measures Apple takes to ensure that users have privacy; measures Apple “accidentally” doesn’t take (how does this infringe upon information?); how does the corporation use such stolen information?

-One layer versus multilayer authentication: VS button

-Convenience versus safety

-Privacy by design: how does this help to solve the privacy predicament?

-Artificial Intelligence and its relation to virtual assistants: How have artificial intelligence and the standards it has for privacy shaped virtual assistant privacy?

-How do the parent companies influence the virtual assistants? How does Siri’s privacy differ from Alexa’s? Is this because of the parent corporations or because Alexa is not as easily accessible on the phone as Siri is?

*bolded words constitute the words that I will probably end up hyperlinking to other pages*

MY ARTICLE FIRST DRAFT

Privacy of Virtual Assistants: Information

Virtual Assistants are software technology that assist users complete various tasks. [2] Well known virtual assistants include Alexa and Siri, and these assistants are from their parent corporations named Amazon Incorporated and Apple Incorporated respectively. Other companies, such as Google and Microsoft, also have virtual assistants. There are privacy issues concerning what information can go to the third party corporations that operate virtual assistants and how this data can potentially be used. [3]

Because virtual assistants are often considered "nurturing" bodies, similar to robots or other artificial intelligence, consumers may overlook potential controversies and value their convenience more than their privacy. When forming relationships with devices, humans tend to become closer to those that perform human-like functions, which is what virtual assistants do. [4] In order to allow users both convenience and assistance, privacy by design and the Virtual Security Button propose methods in which both are possible.

One layer versus multilayer authentication

The Virtual Security button would provide a method to add multilayer authentication to devices that currently only have a single layer; these single layer authentication devices solely require a voice to be activated [5]. This voice could belong to any person, not necessarily the intended user, which makes the method unreliable. Multilayer authentication means that there are multiple layers of security to authorize a virtual assistant to work. The Virtual Security button would provide a second layer of authentication for devices, such as Alexa, that would be triggered by both movement and the voice combined. [6]

There are issues with the lack of verification necessary to unlock access to the virtual assistants and to give them commands. [6] Currently, there is only one layer of authentication which is the voice; there is not a layer that requires the owner of the virtual assistant to be present. Thus, with only one barrier to access all of the information virtual assistants have access to, concerns regarding the security of information exchanged are raised. Such privacy concerns have influenced the technology sector to think of ways to add more verification, such as a VS Button which would also account for motion in addition to the voice to activate the virtual assistant. [6]
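
The single-layer versus multilayer distinction can be illustrated with a short sketch. The following Python snippet is a minimal illustration only; the class and function names are invented for this example and are not drawn from Amazon's, Apple's, or the cited paper's actual implementations. It simply shows how requiring a presence signal (such as the proposed VS Button's motion check) in addition to the voice trigger changes which commands are accepted.

from dataclasses import dataclass

@dataclass
class AuthContext:
    wake_phrase_heard: bool   # layer 1: a voice trigger was detected
    owner_presence: bool      # layer 2: motion or presence detected near the device

def single_layer_authorized(ctx: AuthContext) -> bool:
    # Current behavior described above: any voice trigger is enough.
    return ctx.wake_phrase_heard

def multilayer_authorized(ctx: AuthContext) -> bool:
    # Proposed behavior: the voice trigger and the presence signal are both required.
    return ctx.wake_phrase_heard and ctx.owner_presence

# A command arrives while no one is detected near the device.
ctx = AuthContext(wake_phrase_heard=True, owner_presence=False)
print(single_layer_authorized(ctx))   # True: the command would be accepted
print(multilayer_authorized(ctx))     # False: the command would be rejected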

Voice Authentication with Siri

The "Hey Siri" function allows the iPhone to listen through ambient sound until this phrase is spotted. Once this phrase is spotted, Siri is triggered to respond. [7] In order to not always be listened to, an iPhone user can turn off the “Hey Siri” function. This way, the device will not always be listening for those two words and other information will not be overheard in the process. [8] This voice authentication serves as a singular layer, since only the voice is used to authenticate the user.

Examples of Virtual Assistants

Amazon Alexa

This virtual assistant is linked to the "Echo" speaker created by Amazon and is primarily a device controlled by the voice that can play music, give information to the user, and perform other functions [9]. Since the device is controlled by the voice, there are no buttons involved in its usage. The device does not have a measure to determine whether or not the voice heard is actually that of the consumer [6]. The Virtual Security Button (VS Button) has been proposed as a potential method to add more security to this virtual assistant. [6]

The benefits of adding a VS button to Alexa

The VS button uses technology from Wi-Fi networks to sense human kinematic movements [10]. Home burglary poses a danger, as smart lock technology can be activated since there will be motion present. [11] Thus, the VS button providing a double-check method before allowing Alexa to be utilized would make such dangerous scenarios less likely to occur [12]. The introduction of the Virtual Security button would add another level of authentication, hence adding privacy to the device. [13]

Apple’s Siri

Siri is Apple Corporation's virtual assistant and is utilized on the iPhone. Siri gathers the information that users input and has the ability to utilize this data. [3] The ecosystem of the technological interface is vital in determining the amount of privacy; the ecosystem is where the information lives. Other information that can be compromised is location information if one uses the GPS feature of the iPhone. [14] Any information, such as one's location, that is given away in an exchange with a virtual assistant is stored in these ecosystems. [15]

Hey Siri

“Hey Siri” allows Siri to be voice-activated. The device continues to collect ambient sounds until it finds the words "Hey Siri." [16] This feature can be helpful for those who are visually impaired, as they can access their phone's applications through their voice alone. [17]

Siri's Level of Authentication

Apple’s Siri also has only one level of authentication. If one has a passcode, then in order to utilize various features, Siri will require the passcode to be inputted. However, consumers value convenience, so passcodes are not enabled on all devices. [6]

Cortana

Cortana, Microsoft's virtual assistant, is another voice-activated virtual assistant that only requires the voice; hence, it also relies solely on the singular form of authentication. [18] The device does not utilize the VS button previously described to have a second form of authentication present. The commands that the device handles mostly have to do with saying what the weather is, calling one of the user's contacts, or giving directions. All of these commands require an insight into the user's life because in the process of answering these queries, the device looks through data which is a privacy risk.

Google Assistant

Google Assistant, which was originally dubbed Google Now, is the most human-like virtual assistant. [19] The similarities between humans and this virtual assistant stem from the natural language utilized as well as the fact that this virtual assistant in particular is very knowledgable about the tasks that humans would like them to complete prior to the user's utilization of these tasks. The device practically predicts what the human will want. This prior knowledge makes the interaction much more natural. Some of these interactions specifically are called promotional commands. [20]

Automated Virtual Assistants in Ride Sharing

Ride sharing companies like Uber and Lyft utilize artificial intelligence to scale their scopes of business. In order to create adaptable prices that change with the supply and demand of rides, such companies use technological algorithms to determine "surge" or "prime time" pricing. [21] Moreover, this artificial intelligence feature helps to subside the concern of privacy that was previously taking place in companies like Uber and Lyft when employees were potentially interacting with each other and giving away confidential information. However, even the artificial intelligence systems utilized can "interact" with each other, so these privacy concerns for the companies are still relevant. [22]
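
A highly simplified sketch of surge-style pricing is given below. It is illustrative only and does not reflect Uber's or Lyft's actual algorithms; it just shows the general idea that the price multiplier rises as ride requests outstrip the number of available drivers in an area, up to some cap.

def surge_multiplier(ride_requests: int, available_drivers: int,
                     cap: float = 3.0) -> float:
    if available_drivers <= 0:
        return cap                      # no supply at all: charge the maximum multiplier
    ratio = ride_requests / available_drivers
    return round(min(max(ratio, 1.0), cap), 2)  # never below 1.0 or above the cap

print(surge_multiplier(ride_requests=40, available_drivers=50))   # 1.0 (no surge)
print(surge_multiplier(ride_requests=90, available_drivers=50))   # 1.8
print(surge_multiplier(ride_requests=400, available_drivers=50))  # 3.0 (capped)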

Accessibility of Terms and Agreements

The terms of agreements that one has to approve when first getting their device are what give corporations like Apple Corporation access to information. These agreements outline the functions of devices, what information is private, and any other information that the company thinks is necessary to expose. [23] Even for customers that do read this information, the information is often worded in a vague and unclear manner. The text is objectively in a small font and is often considered too wordy or lengthy in scope for the average user. [24]
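
A back-of-the-envelope calculation helps show why these agreements are considered too lengthy. The word count and reading speed below are illustrative assumptions, not figures taken from any specific company's policy.

def reading_time_minutes(word_count, words_per_minute=200):
    # Rough reading-time estimate for a block of text.
    return round(word_count / words_per_minute, 1)

# A hypothetical 12,000-word agreement at roughly 200 words per minute:
print(reading_time_minutes(12_000))   # 60.0 minutes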

Privacy by design

Privacy by design makes the interface more secure for the user. Privacy by design essentially means that aspects of privacy are incorporated into a product’s blueprint, that is, into how the object or program is created [25]. Even technologies that have little to do with location have the ability to track one's location. For example, Wi-Fi networks are a danger for those trying to keep their locations private. Various organizations are working toward making privacy by design more regulated so that more companies adopt it. [25]

If a product does not have privacy by design, then companies need to add modes of privacy to their products. The goal is for organizations to be formed to ensure that privacy by design is done according to a standard; this standard would make privacy by design more reliable and trustworthy than privacy by choice. [25] The standard would have to be high enough not to allow loopholes through which information could be infringed upon, and such rules may apply to virtual assistants.

Various patents have sought to control the requirement for technology, such as artificial intelligence, to include various modes of privacy by nature. These proposals have included Privacy by Design, which occurs when aspects of privacy are incorporated into the blueprint of a device. [26] This way, corporations do not have to retrofit privacy into their designs after the fact; designs can be written with privacy in mind from the start. This would allow for a more fail-safe method to make sure that privacy algorithms would not leave out even edge cases. [25]
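
The contrast between privacy built into the blueprint and privacy added after the fact can be sketched in a few lines. This is a conceptual illustration only; the configuration fields are hypothetical and do not correspond to any vendor's real settings.

from dataclasses import dataclass

@dataclass
class AssistantConfig:
    # Privacy by design: the protective values are the defaults in the blueprint.
    store_recordings: bool = False
    share_with_third_parties: bool = False
    location_tracking: bool = False

# A device built from the blueprint is private out of the box.
by_design = AssistantConfig()

# Privacy by choice: the device ships permissive and relies on the user to
# tighten each setting afterward, which many users never do.
by_choice = AssistantConfig(store_recordings=True,
                            share_with_third_parties=True,
                            location_tracking=True)
by_choice.location_tracking = False  # the user opts out of only one setting

print(by_design)
print(by_choice)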

Artificial Intelligence

Artificial intelligence as a whole attempts to emulate human actions and provide the menial services that humans currently provide but should not have to be bothered with. [27] In the process of automating these actions, various technological interfaces are formed.

The problem that has to be solved stems from the fact that, in order to process information and perform their functions, virtual assistants curate information. [27] What they do with this information and how the information can be compromised is vital to note for both the field of virtual assistants and artificial intelligence more broadly.

Controversy:

There have been controversies surrounding the opinions that virtual assistants can have. As the technology has evolved, there is potential for the virtual assistants to possess controversial positions on issues which can cause uproar. These views can be political, which can be impactful on society since virtual assistants are used so widely. [28]

Crowdsourcing is also controversial; although it allows for innovation from the users, it can act as a way for companies to take credit where, in reality, the customers have created a new innovation. [29]

How humanlike is too humanlike for virtual assistants to become?

The Wizard of Oz approach to researching human-robot interaction has been in existence. Specifically, this approach has a human leader of a study fill in for a robot while the user completes a task for research purposes [30]. In addition to humans evaluating artificial intelligence and robots, the Wizard of Oz approach is being introduced. When technology becomes close to being human-like, the Wizard of Oz approach says that this technology has the ability to evaluate and augment other artificial intelligence technology. Moreover, the method also suggests that technology, in order to be utilized, does not necessarily have to be human-like. [31] Thus, as long as they have useful features, virtual assistants do not have to focus all of their innovation on becoming more human-like.

See Also

Apple Corporation

Amazon

Artificial Intelligence

Software

Privacy


References

  1. ^ "College admissions in the United States", Wikipedia, 2018-09-21, retrieved 2018-09-25
  2. ^ https://patents.google.com/patent/US9729592B2/en
  3. ^ a b Sadun, Erica; Sande, Steve (2012). Talking to Siri. Que Publishing. ISBN  9780789749734.
  4. ^ Turkle, Sherry. "A Nascent Robotics Culture: New Complicities for Companionship" (PDF). MIT.
  5. ^ Lei, Xinyu; Tu, Guan-Hua; Liu, Alex X.; Ali, Kamran; Li, Chi-Yu; Xie, Tian (2017). "The Insecurity of Home Digital Voice Assistants -- Amazon Alexa as a Case Study". arXiv: 1712.03327.
  2. ^ a b c d e f Lei, Xinyu; Tu, Guan-Hua; Liu, Alex X.; Li, Chi-Yu; Xie, Tian (2017-12-08). "The Insecurity of Home Digital Voice Assistants - Amazon Alexa as a Case Study". arXiv: 1712.03327.
  7. ^ https://dl.acm.org/citation.cfm?id=3134052
  8. ^ https://dl.acm.org/citation.cfm?id=3134052
  9. ^ López, Gustavo; Quesada, Luis; Guerrero, Luis A. (2018). "Alexa vs. Siri vs. Cortana vs. Google Assistant: A Comparison of Speech-Based Natural User Interfaces". Advances in Human Factors and Systems Interaction. Advances in Intelligent Systems and Computing. Vol. 592. pp. 241–250. doi: 10.1007/978-3-319-60366-7_23. ISBN  978-3-319-60365-0.
  10. ^ Lei, Xinyu; Tu, Guan-Hua; Liu, Alex X.; Ali, Kamran; Li, Chi-Yu; Xie, Tian (2017). "The Insecurity of Home Digital Voice Assistants -- Amazon Alexa as a Case Study". arXiv: 1712.03327.
  11. ^ Lei, Xinyu; Tu, Guan-Hua; Liu, Alex X.; Ali, Kamran; Li, Chi-Yu; Xie, Tian (2017). "The Insecurity of Home Digital Voice Assistants -- Amazon Alexa as a Case Study". arXiv: 1712.03327.
  12. ^ Lei, Xinyu; Tu, Guan-Hua; Liu, Alex X.; Ali, Kamran; Li, Chi-Yu; Xie, Tian (2017). "The Insecurity of Home Digital Voice Assistants -- Amazon Alexa as a Case Study". arXiv: 1712.03327.
  13. ^ Lei, Xinyu; Tu, Guan-Hua; Liu, Alex X.; Ali, Kamran; Li, Chi-Yu; Xie, Tian (2017). "The Insecurity of Home Digital Voice Assistants -- Amazon Alexa as a Case Study". arXiv: 1712.03327.
  14. ^ Andrienko, Gennady. n.d. “Report from Dagstuhl: the Liberation of Mobile Location Data and Its Implications for Privacy Research.”
  15. ^ Andrienko, Gennady; Gkoulalas-Divanis, Aris; Gruteser, Marco; Kopp, Christine; Liebig, Thomas; Rechert, Klaus (2013-07-19). "Report from Dagstuhl: the liberation of mobile location data and its implications for privacy research". ACM SIGMOBILE Mobile Computing and Communications Review. 17 (2): 7–18. doi: 10.1145/2505395.2505398. ISSN  1559-1662. S2CID  1357034.
  16. ^ https://dl.acm.org/citation.cfm?id=3134052
  17. ^ Ye, Hanlu; Malu, Meethu; Oh, Uran; Findlater, Leah; Ye, Hanlu; Malu, Meethu; Oh, Uran; Findlater, Leah (2014-04-26). Current and future mobile and wearable device use by people with visual impairments, Current and future mobile and wearable device use by people with visual impairments. ACM, ACM. pp. 3123, 3123–3132, 3132. doi: 10.1145/2556288.2557085. ISBN  9781450324731. S2CID  2787361.
  18. ^ López, Gustavo; Quesada, Luis; Guerrero, Luis A. (2018). "Alexa vs. Siri vs. Cortana vs. Google Assistant: A Comparison of Speech-Based Natural User Interfaces". Advances in Human Factors and Systems Interaction. Advances in Intelligent Systems and Computing. Vol. 592. pp. 241–250. doi: 10.1007/978-3-319-60366-7_23. ISBN  978-3-319-60365-0.
  19. ^ López, Gustavo; Quesada, Luis; Guerrero, Luis A. (2018). "Alexa vs. Siri vs. Cortana vs. Google Assistant: A Comparison of Speech-Based Natural User Interfaces". Advances in Human Factors and Systems Interaction. Advances in Intelligent Systems and Computing. Vol. 592. pp. 241–250. doi: 10.1007/978-3-319-60366-7_23. ISBN  978-3-319-60365-0.
  20. ^ López, Gustavo; Quesada, Luis; Guerrero, Luis A. (2018). "Alexa vs. Siri vs. Cortana vs. Google Assistant: A Comparison of Speech-Based Natural User Interfaces". Advances in Human Factors and Systems Interaction. Advances in Intelligent Systems and Computing. Vol. 592. pp. 241–250. doi: 10.1007/978-3-319-60366-7_23. ISBN  978-3-319-60365-0.
  21. ^ https://www.competitionpolicyinternational.com/wp-content/uploads/2017/05/CPI-Ballard-Naik.pdf
  22. ^ https://www.competitionpolicyinternational.com/wp-content/uploads/2017/05/CPI-Ballard-Naik.pdf
  23. ^ https://heinonline.org/HOL/Page?handle=hein.journals/jmjcila27&div=23&g_sent=1&casa_token=&collection=journals
  24. ^ https://heinonline.org/HOL/Page?handle=hein.journals/jmjcila27&div=23&g_sent=1&casa_token=&collection=journals
  25. ^ a b c d Cavoukian, Ann; Bansal, Nilesh; Koudas, Nick. "Building Privacy into Mobile Location Analytics (MLA) Through Privacy by Design" (PDF). Privacy by Design. FTC.
  26. ^ Personal virtual assistant (patent), retrieved 2018-10-30
  27. ^ a b McCorduck, Pamela (2004). Machines who think: a personal inquiry into the history and prospects of artificial intelligence (25th anniversary update ed.). Natick, Mass.: A.K. Peters. ISBN 1568812051. OCLC 52197627.
  28. ^ https://static1.squarespace.com/static/53853b6ae4b0069295681283/t/5abd99136d2a739c3f5f0ca5/1522374932029/politics_virtual_assistants.pdf
  29. ^ https://static1.squarespace.com/static/53853b6ae4b0069295681283/t/5abd99136d2a739c3f5f0ca5/1522374932029/politics_virtual_assistants.pdf
  30. ^ http://delivery.acm.org/10.1145/1520000/1514115/p101-steinfeld.pdf?ip=136.152.143.155&id=1514115&acc=ACTIVE%20SERVICE&key=CA367851C7E3CE77%2E3158474DDFAA3F10%2E4D4702B0C3E38B35%2E4D4702B0C3E38B35&__acm__=1541007256_445f86dd397d548e295e0d927a776f19
  31. ^ http://delivery.acm.org/10.1145/1520000/1514115/p101-steinfeld.pdf?ip=136.152.143.155&id=1514115&acc=ACTIVE%20SERVICE&key=CA367851C7E3CE77%2E3158474DDFAA3F10%2E4D4702B0C3E38B35%2E4D4702B0C3E38B35&__acm__=1541007256_445f86dd397d548e295e0d927a776f19

Peer reviews and sourcing Information

Tommytheprius Week 9 Peer Review

Some technical things that stood out:

  • You have a lot of terms that look like hyperlinks but are in red. This means that you have tried to insert a hyperlink to a page that doesn't exist, so I'd suggest just taking away the attempted link. You can do this by just clicking on the word(s) and clicking on the red circle with the line through it.
  • Generally, you should use a hyperlink for a key term only the first time you use it. I've noticed some inconsistencies where you either use a hyperlink but not on the term's first use or you repeatedly hyperlink the same terms, such as artificial intelligence.
  • The See Also section seemed like it looked a little weird to me, so I looked at what other pages did and they usually put bullets. I'd recommend using bullets to make it look like more of a list.
  • I think you might want to be more consistent with your citations. You alternate between adding them before the period, right after the period, with a space after the period, and sometimes there is no space between the end of the citation and the start of the next sentence. It just feels like it would flow better if they were all right after the period, which I think is the standard way to add citations.
  • The colons after the "Controversy" and "Privacy of Virtual Assistants" titles seem unnecessary.
  • Is "Virtual Assistants" a proper noun? If not, you could take away the capitalization of the first letters.
  • I believe that the wikipedia trainings say that headings and subheadings should have the first letter of the first word capitalized and the rest of the words in lowercase, unless there are proper nouns, so you might want to adjust some of the headings accordingly. (Ex. Artificial Intelligence, Siri's Level of Authentication, Voice Authentication with Siri)
  • In the reference section, a lot of the links look like repeats. I think you could use the "reuse" option more when adding citations so that citations from the same source accumulate under one number. Also, I'm not completely sure about this, but you may want to put in the ASA citations and not just links for each source.

Comments on content:

  • Lead section:
    • The sentence "Well known virtual assistants include Alexa and Siri, and these assistants are from their parent corporations named Amazon Incorporated and Apple Incorporated respectively." seems a little bulky. A rephrase could be "Well known virtual assistants include Alexa, made by Amazon, and Siri, produced by Apple."
    • Consider adding a citation after this sentence: In order to allow users both convenience and assistance, privacy by design and the Virtual Security Button propose methods in which both are possible.
    • Overall, your lead section is very concise and does a great job of providing an overview for the article!
  • One layer versus multilayer identification:
    • You might want to add a concise definition of "Virtual Security button" because you often reference what it would provide, but I'm not clear on exactly what it is. Also, at one point you use "VS button" without first putting that in parentheses. You could just put VS button in parentheses after the first use and use the abbreviation from that point forward.
    • I'm sort of confused about the concept of multilayer identification. I assume the meaning is that beyond authenticating a single voice (single layer identification), there is some sort of other layer, but I think you could improve this section by adding an example of what another layer could be.
    • When you say that the voice is the only layer of identification, is that referring to any voice or a specific owner's voice? I know that Hey Siri is calibrated to only respond to the voice of the iPhone's actual owner, but I feel like that point isn't completely clear.
  • Examples of virtual assistants:
    • The sentences "The Virtual Security Button (VS Button) has been proposed as a potential method to add more security to this virtual assistant. [1]" and "The introduction of the Virtual Security button would add another level of authentication, hence adding privacy to the device. [2]" seem very similar and maybe redundant, so you could consider cutting one of them.
    • After this sentence: "Siri is Apple Corporation's virtual assistant and is utilized on the iPhone. Siri gathers the information that users input and has the ability to utilize this data. [3]" I think you should add examples of how Siri utilizes this data or to what ends.
    • Some of the information in the "Siri's Level of Authentication" section is very similar to some of the information in the "Voice Authentication with Siri" section. It seems like you might want to consolidate these into just the "Siri's level of Authentication" section because it feels like it belongs with the other examples, not with the single vs multilayer authentication discussion.
    • When you claim that there is a privacy risk in the sentence "All of these commands require an insight into the user's life because in the process of answering these queries, the device looks through data which is a privacy risk." you could add a citation to ensure the reader knows that this is not your subjective thinking.
    • When you say "The device practically predicts what the human will want." it seems pretty subjective, so I'd add a citation.
    • The sentence "Moreover, this artificial intelligence feature helps to subside the concern of privacy that was previously taking place in companies like Uber and Lyft when employees were potentially interacting with each other and giving away confidential information." seems convoluted. You could rephrase by saying something like "Moreover, this artificial intelligence feature helps to allay privacy concerns regarding the potential exchange of confidential user information between Uber and Lyft employees."
  • Accessibility of terms and agreements:
    • The title uses "terms and agreements" but the first sentence says "terms of agreements" - which one is right?
    • You may want to include something about the legal implications of a user agreeing to terms if you have anything related to that in your articles. If the user accepts these terms, are they signing away legal rights?
  • Privacy by design:
    • "If a product does not have privacy by design, then companies need to add modes of privacy to their products." This sentence could appear subjective. Consider adding a citation or rephrasing to not include the word "need". A rephrase could be something like "If a product does not have privacy by design, the producer might consider adding modes of privacy to the product."
  • Artificial intelligence
    • You could add examples of the interfaces mentioned in this sentence: "In the process of automating these actions, various technological interfaces are formed."
  • Controversy:
    • "The Wizard of Oz approach to researching human-robot interaction has been in existence." This sentence sounds a little abrupt." You could rephrase to say something like "One way to research human-robot interaction is called the Wizard of Oz approach." (also I changed this hyperlink to direct to the experiment and not a list of different uses for the term Wizard of Oz)

Overall, I think you've got a great article here! The structure was logical, it had a good tone, and it was a very interesting read. Good job :)
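
Following up on the privacy-by-design bullet above, here is a small, hypothetical sketch of what "privacy built into the blueprint" could look like in practice: the default configuration collects and retains nothing, and loosening it requires an explicit opt-in. The setting names and values are invented for the example and do not correspond to any real product.

```python
# Hypothetical sketch, not any vendor's real settings: it only illustrates the
# "privacy by design" idea, i.e. privacy built into the product's defaults
# rather than bolted on afterwards.

from dataclasses import dataclass

@dataclass
class AssistantPrivacySettings:
    store_voice_recordings: bool = False    # off unless the user opts in
    share_with_third_parties: bool = False  # off by default
    location_access: str = "while_using"    # narrowest useful scope by default
    retention_days: int = 0                 # nothing is kept by default

def opt_in_to_personalization(settings: AssistantPrivacySettings) -> AssistantPrivacySettings:
    """The user must take an explicit action to loosen the privacy defaults."""
    settings.store_voice_recordings = True
    settings.retention_days = 30
    return settings

shipped = AssistantPrivacySettings()
print(shipped.store_voice_recordings)  # False: the product ships private by default
```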

First Draft Peer Review from Midwestmich9

    • I thought your lead section gave a good overview of the scope of your topic and the ideas you wanted to include. I really appreciated how all the sections in your article were mentioned in the lead, because this allows the reader to anticipate what the article is about. There were some sentences you should make sure you have good sources for, because they seem a little biased. For example, the sentence "Rather than taking these potential infringements to heart, consumers value the convenience that virtual assistants provide" sounds biased because it seems like you're assuming the attitude of the consumer.
    • I thought you did a really great job of analyzing the technical aspects of the technologies and identifying how they could be improved to prevent privacy breaches. To improve on the Amazon Alexa section, I suggest adding a sentence or two that describes what Alexa is and giving the reader a little more background before talking about the issues with it. The same could be done for the Siri Section as well. This helps readers who don't have exposure to this technology to fully understand your article. Overall, I thought these sections were well written and I thought it was great how you explained how they related to one another.
    • All the information in the article was well balanced. More information could be added to the convenience versus safety section to bring it in line with the other sections, but overall the structure is balanced!

Funfettiqueen Peer Review

I think that your lead section is really clear. You provide a great definition of virtual assistants, which sets the scene for the rest of the article and gives readers a clear understanding. In the lead section, though, you also present specific concerns regarding privacy. Rather than including these in the lead, I think it could be helpful to have a section titled "Privacy concerns with virtual assistants". The points you bring up are rather skeptical about privacy and virtual assistants (which is probably the dominant viewpoint, too), so I think it is important to distinguish that these are concerns. Additionally, I think it could be really interesting to provide a section about the legality of privacy and these virtual assistants, since it seems like the majority of the population finds them pretty concerning. For the different virtual assistants that you detail, I think it would be helpful to provide information about the different technologies and their functionalities. Also, I think you can hyperlink them to existing Wiki pages, though I did not check whether those exist. In the "One layer vs. multilayer" section, I was a tad confused about why you kept using the conditional "would" - is the feature not implemented yet? I was also a little confused when you went into PbD, so it could be helpful to have a section called "Solutions to privacy concerns" or something like that!

References

Abdelhamid, Mohamed, Srikanth Venkatesan, Joana Gaia, and Raj Sharman. 2018. “Do Privacy Concerns Affect Information Seeking via Smartphones?” IGI Global. In this article, it is eminent that technology has made many tasks that used to be menial much more automated. Siri and Alexa, as well as any other virtual assistants, are user-friendly options that make menial tasks such as using search engines more automated. However, such usages come with a price privacy-wise. The ease of seeking information makes even the most uninterested users use search engines more to search for facts. Thus, a more diverse group of people are exposed to technology through algorithms such as virtual assistants. These people, however, do not consider the increased privacy risk that they are at from using the services with this technology rather than with the original method. As briefly stated in some of my other articles, having virtual assistants as an intermediary creates another barrier for private information to be able to get through. This research will have to do with my article since I am writing on precisely these intermediaries. This article will add specific information about the pros and cons of these intermediaries and whether or not they outweigh using the traditional methods enough to risk the privacy. Obviously, many users will not even consider the privacy aspect, but for those that will, this is a good article to reference. Hence, these interested readers, as well as technological critics, would be the intended audience for this piece. The language can be somewhat tedious to understand, but any interested reader would be able to fully understand the piece. This source is reliable because it is peer reviewed and has very reliable citations as well; there is not a biased present as both pros and cons are produced and addressed throughout the piece. The author does not seem to weigh one side more heavily than the other either, which is vital for the article being objectively interpreted. Andrienko, Gennady. n.d. “Report from Dagstuhl: the Liberation of Mobile Location Data and Its Implications for Privacy Research.” Contents: Using the Digital Library. Through this article, a reader will better understand the potential privacy infringements that are possible through the utilization of smartphones as a whole. Although this article talks about data for the phone as a whole more-so than for the virtual assistants in particular, virtual assistants, generally, end up being another medium toward using the same features present on the phone itself. The article discusses the methods as to how the data that smartphones collect can be misleading and lead a company to the wrong conclusion about users’ lives. Data is practically an “ecosystem” that the analysts have to sort through, and sometimes with little to no contextualization. In addition to the concept of GPS privacy infringement, there is also an infringement potentially on people’s phone calls. The article even directly addresses Siri, Apple’s virtual assistant, by saying that when someone tells Siri to text a message to someone else, Siri now has that information. The article states that there should be a mode of education for the standard user in which they can understand the data ecosystem and how their data is being used. The virtual assistant more-so serves as a medium through which these are features accessible. Hence, this article’s discussion is very prominent to my research. 
The discussion of the GPS specifically is a vital one to note; when someone says “Hey Siri, take me to the nearest Starbucks,” for example, Siri knows where said-person is. This raises important questions, such as is the default setting of the phone for Siri to know where the user is at all times? Also, what exactly can Apple access from this information, and how does it invade the privacy of the user. In my opinion, this article should be accessible to the public; the issues discussed should be common knowledge for all smartphone users. Lastly, the source is very reliable since it was peer-reviewed by many and was affiliated with a reliable company, IBM. Backer, Larry Catá. 2013. “Transnational Corporations' Outward Expression of Inward Self-Constitution: The Enforcement of Human Rights by Apple, Inc.” The Mutual Dependency of Force and Law in American Foreign Policy on JSTOR. Globalization and societal norms as a whole have been affected by the “constitutions” of mega corporations which has thus shaped societal constitutions. These constitutions serve as protocol that form how legitimate services are run. Constitutions have been less effective over time. The relationship between the constitutions of society and corporations has been shaped by technology. Prior to technological advances, constitutions were unable to interact with one another and intermingle their identity. However, now they overlap which causes citizens to both become multicultural but less connected to their own culture at the same time. Technology mixes worlds like never before, and the world is still adjusting to the issues that this intertwinement can cause. Equilibrium is a vital part of the balance between culture and technology and how this can be thrown off. This source is very useful for my writing as the heart of my article lies with the main virtual assistants such as Siri and Alexa, and this article uncovers the fact that cultural implications of this technology emerge unless if a societal constitution seeks equilibrium. Because culture impacts the public so much, some information about the topic of these main virtual assistants is vital to include into my Wikipedia article. This source will be a good reference point, and will be a great place to jumpstart my investigation into the privacy with the big corporations specifically. The title including antonyms such as outward and inward shows the truly complex nature that technology has with human rights and the field of privacy. Larry Catá Backer, a legal expert, has the ability to paint a clear picture of the human rights that people compromise to technology and how these rights are given away with the use of technology without people knowing it. The credibility of the source is eminent, with Backer as the author and with JStor as the database. The article was peer reviewed, and contains information that I would like to use in my article; thus, this article will be an instrumental piece of research. The audience of this article would likely be law students, perhaps, since the author of the piece is a legal expert. Moreover, the article is written in terms that makes it accessible to the common reader. Brin, David. 1999. “The Transparent Society.” Harvard Journal of Law and Technology. This piece outlines the foreshadowing of technology making everyone’s lives transparent in the near future either gradually or with a major leak. 
The main question of the piece is that as technology overcomes society as a whole, will those who do not want to risk their information being infringed upon have to not engage in technology? The piece explores the dominance of various corporations and the government in both the management of information and the introduction of devices that control this management. There are few companies that control the bulk of this information, which makes the entire process less fail safe. The piece looks at specific technology and in the process argues that consumers should not have to choose between liberty and being technologically updated. “Reciprocal technology” specifically is proposed, which reminds me of the privacy by design concept present in my other research. This information is useful for my research as these virtual assistants are becoming more and more dominant on the daily, and two main companies control the fate of the actions of the bulk of virtual assistants. This article does seem to have bias, however, as in the introduction specifically, certain words such as “argue” are used which implies that some of the information may be opinionated. However, this source is from the Harvard law library so it is quite a reliable source. When going through information from the source, I will make sure to solely pick out the bipartisan viewpoints and ensure that no opinionated facts make it into my article. Even though this article is not perfect because of the slight amount of bias it contains, I still think the information is useful enough and provides an alternative to privacy by design, so I would like to use this research in the manifestation of my Wikipedia article. Cavoukian, Ann. 2014. “ Building Privacy into Mobile Location Analytics (MLA) Through Privacy by Design.” Aisle Labs. This article discusses the dangers that a lack of privacy in location tracking technology can cause. If a company can gather enough information about a person, they can perhaps figure out other facets of a person's life or livelihood. The goal is to gain a "win win relationship" between the consumer and producer by creating "privacy by design." This terminology essentially means that in a product’s blueprint, aspects of privacy are incorporated into how the object or program is created. Even technology uses that have little to do with location have the ability to track one's location. For example, wifi networks are a danger for those trying to keep their locations private. Various organizations are working toward making privacy by design more regulated so that more companies do it. This article is vital for my article since I need to find the relationship between privacy by design and virtual assistants. I will compare sources between this article, ones specifically on virtual assistants, and ones for technology privacy in general. This is an interesting piece because it looks at privacy in technology from the producer's point of view. Rather than looking at the output that a product has with regards to privacy, this article looks at the making of a product's infrastructure and how privacy can be incorporated. If a product does not have privacy by design, then companies need to add modes of privacy to their products. The goal is for organizations to be formed, as described in the article, to ensure that privacy by design is done by standard; this standard would make privacy by design, hopefully, more reliable and trustworthy than privacy by choice. 
The standard would have to be high enough to not give loopholes of information to infringe upon, and such rules would have to apply to virtual assistants in order to help out my article's subject. This article is written by Cavoukian, a PhD and has also been peer reviewed and edited from two other scholars. Thus, this article is very trustworthy and will be quite useful in my argument. The audience of this piece would likely be those interested in the privacy of common products they use or people who are perhaps planning on creating a product that they wish to be trustworthy themselves. Cooper, Robert. 2004. “Personal Virtual Assistant.” United States Patent 27–65. This source is a patent about personal virtual assistants, which describes how the behavior of the technology is affected how users input information. A remote computer is the technology that ultimately makes the device conform to its algorithms. Various words and techniques of the voice to convey emotion also affect the response from the virtual assistant to whatever information is thrown at it. Many professionals have actual human assistants to help them complete menial labor that is not worth their time, such as cold calling. Virtual assistants provide this service to the wider market. The system of “varying voice menus” and “voice response systems” are patented in this document. The ability to adapt behavior is one that is innovative for technology and is unable to be duplicated in the precise method that this patent lays out. The “Voice Activation,” or VA system, provides a method for which how the device is triggered. Connecting this to my other sources, there is a source for Siri that uses the “Hey Siri” feature. This feature, if activated, essentially permits the device to constantly be listening for those words. In the process, as discussed in other articles, information that does not concern those two words can be infringed upon. Ultimately, this article proves that the technology concerning virtual assistants is proprietary and should not be duplicated. This fits into my article’s scope because the laws concerning patents and privacy intertwined ultimately influence how privacy by design can be upheld. Without legal boundaries, some companies may not bother to have privacy by design. The patent responds to laws, as well as customer demand, and shapes how the device’s algorithms manage aspects of their operation, such as privacy; the way that these algorithms differ is what makes the patent feasible and thus proprietary. Since this is a legal document, there is no bias associated, making it a reliable source. The patent is on Google, which is the holder of a virtual assistant itself, so the source is a reliable one. The intended audience is likely a technical audience, since the patent uses legal terminology. Perhaps a lawyer for the tech companies would be very interested to figure out what can be duplicated and what is proprietary. This article will be instrumental for my research since I should include the case law surrounding the article so that it covers all of the topics surrounding the privacy of virtual assistants possible. Damopoulos, Dimitrios. 2013. “User Privacy and Modern Mobile Services: Are They on the Same Path?” Contents: Using the Digital Library. Although virtual assistants are primarily for the primary user of the device being utilized, the receiving user is an important facet of the exchange as well. 
Many of the functions of virtual assistants have to do with calling or texting someone else, as in another user. Thus, not only the user’s privacy is at stake but also this receiving user as well. Many of the articles overlooked this fact, which is why delving deeply into this article seemed so attractive. Some malware testing is used in a study in this article to ultimately test out the privacy from both sides. The study results are shown in this article, along with the technological features that contribute to how much privacy there is or lack thereof. In all of my other articles, the privacy was solely discussed from the point of view of the owner of the device using the virtual assistant; this article has a different point of view and analyzes the other side which is useful for an objective Wikipedia article. Thus, I now have information to include about both parties of an exchange with the usage of virtual assistants. This source’s author has many affiliated authors and reliable sources cited throughout, so the article is reliable from that standpoint. Moreover, the audience of this article would have to be technical as various aspects of code and other technology is described that the average user may not be able to fully understand. Computer science jargon is used that many, including myself, do not fully understand without using other resources. However, some of my other articles lack this technological knowledge, so if I utilize some of these terms and then hyperlink them to their respective pages, my article may seem more well-informed than it would have been without my researching this article. Haake, Magnus. 2008. “Visual Stereotypes and Virtual Pedagogical Agents.” JStor. This article discusses the use of virtual devices in traditional mediums such as newspapers. The article outlines the pros and cons of intervening in these traditionally not technological mediums. Although technology allows for new features to be added to these traditional mediums, for mediums such as newspapers, the traditional aspect of using the page is lost. Specifically, virtual pedagogical agents are evaluated and the pros and cons of their introduction to traditional mediums is discussed. The virtual assistants are replacements for functions that used to be manual labor; thus, an element of personability is lost. This gap is what technology tries to fill with the personalization aspects discussed in some of my other articles. Apple tries to make technology personalizable to aspects of one’s life such as culture with the various accents. Also, Apple has made Siri have an amicable front with the ability of Siri to call the user by their preferred names. Hence, this article connects to my research since it discusses the virtualization of facets of labor or items that used to be manual or tangible respectively. The source is reliable since it is a peer-reviewed source found on JStor. The article is not biased, as it lays out all of the facts in an objective manner. The source does not seem to sway in either direction of wanting technology to intervene. The intended audience would really be anyone who is interested, as the language is not particularly scholarly, so the average audience would understand. This article will be relevant for my research since privacy is one of the aspects that determines whether or not this technological intervention is necessary or detrimental. 
I can create a portion of my article or add to one of the existing portions and outline the pros and cons of technological intervention specifically for the fields that virtual assistants address. Lei, Xinyu. 2017. “The Insecurity of Home Digital Voice Assistants – Amazon Alexa as a Case Study.” This article is a case study on home digital assistants and how they ultimately are insecure. The case study specifically focuses on the solely single authentication method that allows the devices to take commands from any voice even when the owner is not around. The article implies that there should be a more secure method to know whether or not the owner is around so that voice commands are more secure. Even with potential infringements, virtual assistants are growing in scope. People care about convenience, so they choose to be ignorant about these issues. Main issues occur when the owner is not present with the virtual assistant, because then the algorithms of the technology is all that separates infringement from occuring. The virtual assistants, then often transfer information to their respective corporations such as Apple and Amazon. A “VS Button” is ultimately the second factor of authentication that would make virtual assistants more secure and less easy to infringe upon. This article is reliable since it is from Michigan State University, a valid institution, and is peer edited. Ultimately, the article is made for scholars as the terminology is quite technical and the average user may not understand. The potential bias present seems to be the fact that the article does not really provide that much of a counterclaim as to the potential fact that virtual assistants may be more trustworthy than the article is letting on and that the author may be paranoid. This article will be helpful for my research since I am investigating the privacy of virtual assistants, and this article provides a multitude of reasons for the lack of privacy with the technology. There are many technical terms that I could use that could potentially become hyperlinks and other articles in the future. This article will be necessary for my research, specifically, since this will provide me with the technical terms that some of my other articles may lack. Being that I want my article to be informative, these technical terms should be an important part of the points I make. Mariani , Joseph. 2014. “Natural Interaction with Knowbots, Robots, and Smartphones.” Springer. This book, in chapters 3 and 4, describes a study where the dialog systems are used by controlled study participants. However, as seen with the results, their use is not necessarily indicative of the everyday user. Because a lot of data is involved now, studies are needed with unknowing human subjects. If personal data is involved in a study, users are less likely to be honest with their usage since their personal information, if they give it, may be compromised. However, if the studies do not include gathering data or if the data is erased right away, then the subjects are more likely to be honest with their usage. Maintaining the privacy of the subjects is very necessary and the chapter lays out how to do so. Maintenance of the system is also necessary, to maintain the privacy by design. As seen in chapter 4, speech based interaction is vital to look at; for my research with virtual assistants, their usage revolves around speech based interactions. Multiparty settings, if applicable make the system even more complex. 
This information can be useful for my article because it specifically talks about the technology of voice applications in general, which can be seen in more devices than virtual assistants alone. However, since the technology still applies to virtual assistants, it can still be applied to my article. This book provides me with new knowledge on the technology which will be useful. Moreover, this book is trustworthy as it was curated on Google Scholar and is peer-reviewed. The audience would likely predominantly be a computer science scholar since he or she would understand the code models or graphs that are shown in this piece. However, even though some of the information went over my head, I still think this article will be useful for my research. McCorduck, Pamela. 2009. “Machines Who Think | A Personal Inquiry into the History and Prospects of Artificial Intelligence.” Taylor & Francis. This annotation is specifically on the thirteenth chapter of the book in Part V “Tensions of Choice.” This chapter discusses the implications of Artificial Intelligence and if it is moral. Because privacy has to do with the morality of the infringement of information, this article is applicable for my research. Because this chapter is in the latter part of the book, this chapter has the foundation of the fact that artificial intelligence can be created. Thus, this chapter is not questioning the possibility of artificial intelligence but rather the morality of it which plays a factor in the privacy of AI as a whole. This chapter questions if it should be allowed for artificial intelligence to be taken to as far of a level as it will inevitably get. The chapter goes through several arguments, some on the technical side in saying that artificial intelligence is helpful for society so it should be considered an acceptable invention with others saying that the morality is simply not okay. For the moral arguments, the piece draws comparisons with the concept of various activities being alright to think about but not to actually follow through with them; this stance is taken for AI as a whole. The autonomy of AI is both a helpful and scary thing. This chapter is useful for my research because many of the reasons that the chapter outlines for the issues with AI have to do with how intrusive the technology can be. Thus, the chapter will be useful for the writing of my article. I can include a section about the issues with morality and how these intrude upon users’ privacy. This piece is also very trustworthy since it utilizes many experts, including technology scholars at the top of their field at the Massachusetts Institute of Technology. This article would mostly be useful for a technical audience, or at least readers who are familiar with some technical or AI terms in general. Hence, this useful and legitimate source will help me in my research and has influenced a potential sub-section and hyperlink of the morality of privacy that I would like to add to my article. Miner, Adam S. 2016. “Smartphones and Questions About Mental Health, Interpersonal Violence, and Physical Health.” JAMA. This article has to do with the manner in which Virtual Assistants respond to issues about mental health or potential suicide risks. The way that these virtual assistants gather this information is by someone’s command to them. 
Sometimes, if smartphones are the only resource that people with mental issues have, it is vital for these smart devices to have responses either aiding the scenario or sending the human to the right place to look for help. This article discusses the various responses and the need for continuity and completeness of the answers that virtual assistants provide. This article’s process to go about looking at this continuity and completeness was a trial in which they tested the questions. Each virtual assistant, then, was ranked depending on their issues with the results curated from this trial. This information would be an interesting addition to my article for a multitude of reasons. For one, there has to be privacy associated with this issue. If students tell a teacher in school about a mental health issue and if it is serious enough, teachers are mandated reporters. How is the privacy with Siri any similar or different? Another important topic to question is whether or not Apple gets such data and what they can do with it. The audience focused on this article should be tech companies perhaps to see how they compare to other virtual assistants, but also humans in general to support the devices that care more about human health. This article was written with mental health specialists and technologists together, so the information is quite valid. Hence, this article provides an alternative aspect of virtual assistants to look at. When audiences and society think about virtual assistants, the way that they respond to mental health risks is not of utmost notoriety normally; however, as seen in this article, this is a vital thing to note. Najaflou, Yashar. 2013. “Safety Challenges and Solutions in Mobile Social Networks.” Arxviv. All apps on the technology that virtual assistants are used on, even Facebook, have the potential to be infringed upon for the purpose of the data of the user being compromised. For example, if one were to say to Siri “post that picture with my mom on Facebook,” which I tried while reading this article, Siri will respond by saying that she has to access your Facebook data to do so. For the average user, he or she may not understand the implications of compromising this data or the fact that there is potential for the data to be used for more than merely posting a photo. As the article discusses, MSN or mobile social networks, provides an environment for data sharing. However, there are many challenges both logistic wise and privacy wise which this article discusses. There are trust and security issues with MSN, which for the informed user, can raise red flags potentially leading to them not using the services. Because my article has to do with the privacy of various data that can be stored with virtual assistants, this article is very relevant to my research; the virtual assistants specifically are tied to the applications discussed in this article. Virtual assistants make this information even more faulty privacy wise, however, since information now goes through two mediums rather than just one. This article is fairly trustworthy as it is worded objectively and is written from a nonpartisan standpoint. The audience would have to someone familiar with the jargon of MSNs, or else further research would be required. Thus, people with technological backgrounds are more likely to fully understand this article than the average user. 
However, the average user should know about this information since the main topic discussed is applicable to the average user; ultimately the user themselves are going to be the ones who decide whether or not to take privacy concerns seriously or not. Quast, Holder. 2013. “US9571645B2 - Systems and Methods for Providing a Virtual Assistant.” Google Patents. Since the law terminology is vital in determining how much privacy is necessary versus how much is simply ethically moral by virtual assistant parent companies, looking at the case law is vital. This patent specifically has to do with the act of a virtual assistant calling someone; an example of the command to warrant this would be “Hey Siri, call Denis.” In addition to this command having to do with this article, it also has to do with the “Hey Siri” potential privacy infringements described in my other articles. This article specifically talks about the interface behind virtual assistants and precisely what parts contribute to the act of the virtual assistants performing a call. The embodiments for the precise protocol is described; in the process, the act of where the information is stored and how it can be utilized is laid out. Also, through asking the virtual assistant to call someone else, in the process, the virtual assistant also looks at all of the contact information; how much can be curated out of that is described in the article as well. This article will be useful for my research because the call function, is ultimately, the intended purpose for a phone. Hence, the case law surrounding the ways in which virtual assistants can affect this main usage of a phone is vital to note for my research. The audience of this patent should be anyone who would like to know the technical terms and expectations for virtual assistants. This article is very reliable as well since it is a patent that has been legally reviewed. Nuance Communications Incorporated endorsed this patent, which would potentially cause a bias. However, the company is not one of the main ones that I will bring up in my research, as Wikipedia is for the common user. Because of this, I would like to more-so bring up companies and virtual assistants that the common user would recognize, such as “Amazon Alexa” and Apple’s “Siri.” Hence, this reliable source will be useful in the technical and legal aspects of my research. Rao, Ashwini. n.d. “Expecting the Unexpected: Understanding Mismatched Privacy Expectations Online.” Usenix. Because users are ultimately the ones who are affected by privacy infringements, their expectations of privacy are the ones that truly matter. Because the public does not know the implications of privacy infringement, they often do not bother to go through the dense policies that they have to agree to before using a device. There needs to be a way to match what consumers expect and what companies give back to them privacy wise. When users first think of a website or internet service, if a website has properly done its job, they think of the intended purpose of the service; users often overlook the infringements that come along with the service. This article specifically surveyed users to try to estimate what user expectations are when using an online service. Various characteristics of the websites are what often make someone have a view of whether or not privacy by design was valued. Mismatches between results and expectations were covered as well. 
This article is useful for the audience of readers that would like to find out what the public values in online infrastructure. This article will be useful for my research since it is necessary to know what the public expects and what their standards are for virtual assistants’ privacy; this concept is what will decidedly shape the potential innovations in privacy disclosures that may be necessary in the future. If my article on Wikipedia contains the background of what people expect, such information will be visible to a wider audience which is a necessary stepping stone for innovations to be made. This article was made in conjunction with professors from Carnegie Mellon University as well as tech professionals, so it is a reliable source to be used that will help sum up the relevance of my article for the daily users of technology. Sadun, Erica. 2013. “Talking to Siri.” Google Books. I specifically read the first chapter entitled “Getting Started with Siri.” By using Siri, people send information to Apple, such as who their contacts are, music taste, and fitness statistics. Apple has the ability to send this information. The privacy policy is shown and the book states how users are “encouraged” to read this long policy prior to using Siri. This book is more-so from the point of view of trying to show users what the device has to offer; thus, although the book does touch on the fact that Siri does give Apple a lot of personal information that can potentially be used by third-party companies, the book encourages the user to utilize all of Siri’s functions that ultimately infringe upon privacy nonetheless. The rest of the book other than this chapter is essentially a guide to using Siri; in the process, the implications that are possible are briefly discussed. This chapter is useful for my article since it discusses how infomration is sent to third parties; now I can have a source to cite this information. I knew that this fact was the case, but before I had no evidence. Thus, even though this book does have a bias of encouraging the user to use Siri, it is still useful for my research. The authors are very reliable, as they are known authors in this field. Also, the book was curated on Google Scholar making the source reliable. Ultimately, the book’s audience is an Apple Siri user, as the book does outline the ways in which the device can and should be used. Thus, even though this source is potentially biased, its evidence is still very useful for my article. I plan to use this book to cite information that I needed a source for, and this concept is one of the main points of my article, so this book helps me work toward the goal of creating my article.

Teltzrow, Maximilian. 2004. "Impacts of User Privacy Preferences on Personalized Systems." Springer. This article discusses how modern technological algorithms have made preferences available that were unavailable prior to technological advances; such advances are especially utilized in the educational sector, where every person learns differently. The article discusses the need for a balance to be created between privacy and personalization, as personalization reveals a significant amount of information about a person. Various data is inputted with personalization, and the privacy of such information can be more easily compromised than a secure set of data that cannot be touched. Virtual assistants are among the "user adaptive" systems, even though the article does not address them directly since they emerged after the article was written in 2004. Users have the ability to tell Siri to talk in a certain accent or call them a particular nickname, which are all actions that essentially make the code personalizable. Thus, this article connects to what I would potentially want to put into my Wikipedia article; a good additional section that I failed to think of prior to reading this article would be the privacy of the aspects in particular that are customizable, and whether these facets are more easily infringed upon than the set-in-stone ones. This article is peer-reviewed and from a reliable publisher, so it is a reliable source; as I outlined above, it is a useful source for my findings. There does not seem to be a potential bias, as the information is fairly objective and statistical. The audience is likely a technology expert who would be able to understand the terminology used. Overall, this article is an interesting source, as it showed that virtual assistants were derived from something prior to their existence; specifically, this article discusses the privacy of personalizable technology, which is precisely what virtual assistants are, as they are objects made to help a user personally. Turkle, Sherry. 2006. "New Complicities for Companionship." MIT. This article talks about the psychology of robots in society; although virtual assistants are not physical robots that walk around, they hold human roles, as they have names like Siri and Alexa. Hence, even though this article is not about virtual assistants specifically, the information is very valid for my research, as a lot of it revolves around the psychology, or perception of privacy, that humans have. This article discusses how humans are allowing robots into their everyday routines. Robots do things for the consumer, and do them in an uncanny human fashion. In fact, by completing tasks for us, robots "nurture" the customer. Generally, people like what nurtures them; the article even goes to the length of comparing the comforting feel of a doll with the guardianship that robots provide for the consumer. The concept behind robots, as well as virtual assistants, is to mirror human traits as closely as possible. The article also discusses how humans in need, such as young children or senior citizens, would get more attached than a self-functioning adult. Because feeling comfortable with a device implies trust, the nurturing relationship that virtual assistants provide can be an issue for people concerned with privacy. Similar to a friend telling another friend their secrets, if one trusts robots or virtual assistants, one may overshare information.
Hence, this article is good for my research because it gives me good psychological information that my other articles lack about why, other than convenience, humans expose themselves to privacy infringement. The juxtaposition of psychological and technological evidence will be good in adding credibility to scholars of all backgrounds to my article. This article would be relevant to audiences from both psychological and technical backgrounds; however, it is fairly technical. This article is quite reliable as it is from a scholar from MIT who has background in sociology and psychology; since she is from a technical school, she also possessed the technical information and resources necessary to write a paper on robots. Hence, this article is useful for my research and is a reliable source to look at. West, Darrell. 2018. “The Future of Work: Robots, AI, and Automation.” This annotation is on the second chapter of the book, and is called “Artificial Intelligence.” This portion describes the automation of tasks that led to the arise of virtual assistants and how this automation affects daily life greatly. Virtual assistants take jobs from humans, but are very helpful and convenient at the same time. Big political predicaments occur due to artificial intelligence as a whole. The social impact of robots and AI is very important to society and the government as a whole. With AI taking jobs from people on the daily, there is an imposing economic threat that is bound to erupt if technology keeps going in the same direction. Throughout the process of AI, information is gathered. Unlike the humans that used to be in such positions, technology has the ability to perform algorithms that can infringe upon someone's privacy greatly. Without the simple service jobs that AI can now perform, a large sector of the workforce would have no source of income. Society begins to care about these issues more as the technology progresses; with more of a threat to their professions, the general public cares more about technology emerging. Also, this piece discusses how technology fills the needs of people in general, such as transferring money. This article can be very useful as when technology becomes so advanced, it becomes a threat not only to people’s livelihoods, but also to their privacy. Because all of the virtual assistants have their parent companies, such as Apple Incorporated or Amazon, there is a question of what information these companies can harness from their respective virtual assistants. The author, Darrell West, is an esteemed director from Brookings Institution, so this piece is very trustworthy. The audience of this article could be people who want to learn about modern technology and how it can affect their daily lives. I believe that it can add to my page because the article discusses the risks of privacy in depth and the immense amount of infringement that is possible with the sophisticated technology today. The article also discusses how customers can be ignorant about their privacy being infringed upon. Hence, this source is credible and would make a very welcome addition to my article. Ye, Hanlu. 2014. “Current and Future Mobile and Wearable Device Use by People with Visual Impairments.” ACM Digital Library. Although this article is specifically about people with visual impairments, it is still important to note issues of privacy both because everyone matters and because many of the same issues apply to the larger audience. 
Because people with visual impairments have to have information presented to them in different fashions, this poses a privacy risk. Rather than certain aspects of a device (specifically Alexa or Siri) being virtual, such functions may have to be audible. If one is walking around and exposing either contact or payment information audibly, an immense privacy risk is taken. The survey done with this source ultimately took two groups, one with a disability and one without one; the survey tracked how the users interacted with their devices and how this affected them in intellectual and social manners. This source is important for my research, since I would like to produce an all inclusive article; I do not want to leave any people’s usages of virtual assistants out of my Wikipedia article. Hence, I can now either use the information in this article to add in my other sections or I can create an entire section on technological privacy for people with disabilities. This article will allow me to be even more omniscient when writing my piece. The audience of this piece would likely either be people with disabilities or scholars who are genuinely interested in looking at all aspects of technology. The terminology is fairly understandable, but perhaps for someone of no technology background, the content may be slightly confusing. This article does not seem to have a potential bias, as all of the information is backed up. When I was reading the article, the piece seemed to primarily lay out facts and did not seem to offer any opinions making it a reliable source.
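
As a drafting note to myself: to keep the "Hey Siri"-style always-listening behavior described in several of the annotations above concrete, the following is a purely illustrative sketch, not Apple's or Amazon's implementation. It shows the privacy point at issue: audio is checked locally in a continuous loop, and only what is captured after the wake word should leave the device. The wake phrase, function names, and upload step are all assumptions made for the example.

```python
# Purely illustrative sketch of a wake-word loop; every name and behavior
# here is hypothetical and invented for this example.

import time

WAKE_WORD = "hey assistant"          # hypothetical wake phrase

def capture_audio_chunk() -> str:
    """Stand-in for a microphone read; returns recognized text for the demo."""
    return input("(mic) ")           # placeholder so the sketch is runnable

def send_to_cloud(utterance: str) -> None:
    print(f"[uploaded to vendor servers]: {utterance!r}")

def listen_loop() -> None:
    while True:
        chunk = capture_audio_chunk().lower()
        # Everything up to this check is, in principle, processed locally and
        # discarded; nothing has left the device yet.
        if WAKE_WORD in chunk:
            command = capture_audio_chunk()   # the user's actual request
            send_to_cloud(command)            # this is where data leaves the device
        time.sleep(0.1)

# listen_loop()  # uncomment to try interactively
```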


Breadyornot's peer review

I like the general structure of your article; the sections are clear and represent two concrete examples that people in the public sphere can recognize (Alexa and Siri). However, the sentences within most sections are a bit unclear and could use some cleaning up and refining. Also, hyperlinks are needed. I noticed that you wrote a note saying bolded terms are going to be hyperlinked, but I'm not sure the bolded text is showing up or whether you have started that yet. The rough draft is a little hard to read since no headings or subheadings are listed, but I'm sure this is an easy fix you can work on closer to the final draft. Most sections have good definitions and examples, something I think is beneficial in describing privacy and digital technology. However, the convenience and safety section is a bit short compared to the others and could use some more research; otherwise, I think it would be beneficial to absorb this section into another or state something about the lack of sufficient research on the subject to provide a more all-encompassing definition. Also, make sure to add citations now, as they could be difficult to track down later on when the draft is reworded and revised. Overall, this looks like a great start!

Peer review week 7

Cal.oasis: I think that your lead section does a good job of informing readers about the controversies regarding virtual assistants and what they are. I think you can add hyperlinks, for example to the term artificial intelligence.

Additionally, I think that the structure of your page is good, but you can add additional header sections. For example, right now everything is under subsection titles, but you could add overarching titles: one that includes Alexa and Siri, another about the makeup of the device - one layer versus multilayer authentication and privacy by design - and another about convenience versus safety.

I also thought that some of your sections are unequal and have less information than others. For example, where you compare Siri and Alexa and in your controversies discussion, there is a small amount of information compared to your other sections. I also think that you can add transitions and background information at the beginning of your sections. For example, in your Apple's Siri section, you start with features that Siri has, but you neglect to offer a definition of what Siri is.

Overall, good job! I think that you did a good job explaining the features of virtual assistants, but the article needs work on giving more background information to those who do not know what they are.

Angryflyingdolphins peer review (week 8)

Your article is well developed. It was very interesting to read through, especially the sections detailing different virtual assistants. Each section is also fairly well balanced in the amount of information, which is great!

One of the biggest problems I found throughout your draft is wording and sentence structure. A lot of the wording is clunky and some of your sentences are structured in a disjointed way which messes with the flow of your article. For example, the sentence in your lead section: "There are specifically issues in regards to the lack of verification necessary to unlock access to virtual assistants for them to take commands." Although I can understand what you are trying to say, the sentence is a bit hard to read through. There are also a few sentences in which your wording is redundant. For example, in the beginning of your lead section you write "provide assistance to help users." The words assistance and help are synonyms so there is no need for both in one sentence.

Your lead section does a great job of providing readers with an overview of your article, but you also elaborate too much. Most of the second and fourth paragraphs in your lead section can be cut out and added to the dedicated sections below. The section on one layer vs. multilayer authentication is a bit confusing. The title of the section suggests that you are about to compare two things; however, you only discuss multilayer authentication and Siri. You never define multilayer authentication or clarify whether Siri is one type of authentication or the other. You also have multiple sections dedicated to explaining the different assistants offered by different companies. I feel like you should consolidate all the assistants into one larger section to help with organization. Your section titled "Implications of Privacy Agreements" should be reworded, and some more information should be added to round it out. The title of your final section should also be formatted so that it is dedicated to controversy in general and not just one situation. I feel like there is a high chance that you will come across more controversies in your research, so you may eventually want to put all controversies together in one larger section.

Overall, great work! Other than wording and formatting, there are no other glaring issues. Your tone throughout is unbiased and very encyclopedic. There are hyperlinks and citations throughout, which helps round out your article.


Peer review (week 10)

PanadaFantasy:

Lead section: I think the lead section is very brief and provides the audience with a very clear definition of virtual assistant privacy and the relationship between virtual assistants and privacy. I really like the lead section! Still, I think that in the first sentence, "Virtual assistants are software technology that assist users complete various tasks", the "assist" should be "assists", which is only a small grammatical error. Besides, I suppose that the last sentence should rely on some journal sources, since it points out the purpose of "privacy by design".

Other sections: The structure of the other sections is well formed; from "One layer versus multilayer authentication" and "Examples of virtual assistants" to terms of use, AI, and controversy, it is very clear to see the structure of this article. In the authentication section, I think that in the sentence "A specific instance in which there are issues with the lack of verification necessary to unlock access to the virtual assistants and to give them commands is when an Amazon Alexa is left in a living quarters unattended." the "a living quarters" should be "living quarters". And in the sentence "Thus, with only one barrier to access all of the information virtual assistants have access to, concerns regarding the security of information exchanged are raised", the phrase "information virtual" should be "virtual information".

Then, in the example section, the word "knowledgable" should be "knowledgeable". I think this section is wonderful since it gives concrete examples such as "SIRI" and "Cortana", which makes this part informative and easy to understand. Besides, even though many virtual assistants are mentioned, they are covered evenly, which I think is very good. And the tone of these parts is unbiased and mostly relies on other journals. However, in the last two sections, "AI" and "Controversy", the information is not very detailed or in-depth. I think that, if possible, more sources should be added to these parts. Lastly, I am quite confused by the subsection "Wizard of Oz approach"; after reading it, I still don't know what this approach is or how it differs from other traditional approaches. And since "Wizard of Oz approach" does not have a hyperlink, I suppose it would be better if you could introduce it first with more detailed information.

Overall, from my perspective, this article is very useful to ordinary readers. It is unbiased in the tone of every sentence and very reliable, as most sentences are based on citations. What I want to suggest is that you add some vivid pictures to your article and add more citations, since there are only 16 bibliography entries.

