This is the talk page for discussing improvements to the Trusted Computing article. This is not a forum for general discussion of the article's subject.
Article policies
Find sources: Google (books · news · scholar · free images · WP refs) · FENS · JSTOR · TWL
Archives: 1, 2, 3
Trusted Computing is a former featured article candidate. Please view the links under Article milestones below to see why the nomination failed. For older candidates, please check the archive.
This article is rated C-class on Wikipedia's content assessment scale. It is of interest to the following WikiProjects:
The problem with this article is that it has weasel words. Here are some examples:
2nd paragraph
3rd paragraph
The nature of trust
There's still more, but this is enough to warrant the {{weasel}} tag.
Please see my reasons and discuss at Talk:Trustworthy_Computing#reasons for proposed merge. ObsidianOrder
I didn't put the link in originally, but I replaced it; they are actually pretty central to this field: they own many patents in the area and have developed things like trusted keyboards, etc. -- Gorgonzilla 20:41, 8 June 2006 (UTC)
Given that the current article equates trusted computing with the TCG, which is in turn a commercial consortium, that position is somewhat difficult to understand. Nor can I quite see why, having decided that the TCG is the only game in town, there should be a separate article. I admit I have not done a lot in the TCG and only attended one meeting, the very first. But there are a lot more games in the trusted/trustworthy space. -- Gorgonzilla 00:52, 10 June 2006 (UTC)
Wow, check out the Italian version of this article, which got FA status. I don't speak a word of the language, but perhaps there is an opportunity here to improve the English-language version of the article? -/- Warren 18:03, 15 July 2006 (UTC)
There appears to be a heavy bias towards a paranoid viewpoint espoused by a small minority in the "Disputed issues" section which lends undue credit to fearmongers. This portion of the article should either be substantially shortened (and "what-if" clauses removed) or the responses of competent professionals who have denounced such myths should be added. —Preceding unsigned comment added by 71.98.89.174 ( talk • contribs)
I hope I'm not screwing up someone's legit system, but I'm going through and removing a lot of non-grammatical spaces (i.e. a space before a punctuation mark, or double/triple spaces between words). Sorry if this is any problem. -- Gbinal 01:44, 15 August 2006 (UTC)
The section on "Proposed owner override for TC" seems to be just another disputed issue. Some people think that it is a good idea, and some people don't. I suggest putting it with the Disputed Issues. Also, it is written with a very anti-TC POV. It complains that the TC folks have refused to scrap a feature in order to please the anti-TC folks. Yes, that's right, just like they disagree on other issues. It should just describe the dispute. Roger 01:29, 23 September 2006 (UTC)
Add a section on the partisan objections of RMS and the GNU project, as well as the Free Software Foundation: they claim this undermines Alan Turing's universal computer concept -- that a computer is a machine that can perform the same function as any other existing machine (printing press, fax, polygraph, cassette tapes, records, radio, television, etc.) -- and that trusted computing could limit computers' ability to do these things.
Thanks, -- Mofomojo 06:10, 27 September 2006 (UTC)
"The U.S. army has also stated that every new PC bought by the army must support trusted computing [3]" -- the referenced article DOES NOT state this.
You write at footnote 4 that "the link [no longer states] that PCs must have a TPM." That doesn't mean that the Army dropped the requirement, does it? 10/21/2006 Jvsullivan 19:26, 6 November 2006 (UTC)
I don't mean to be trouble to you, but you're making the affirmative assertion that the requirement was dropped. To base that affirmative assertion on the absence of evidence that it wasn't dropped doesn't seem very encyclopedic. If I come across a reiteration of the requirement, I'll certainly point it out. But I think you should reconsider characterizing the absence of a reiteration as a reversal. Thanks for your attention. Jvsullivan 17:32, 21 October 2006 (UTC)
"We have to suppose"? The cited source says nothing to support the proposition that the requirement was dropped. What is it about an FAQ page that happens not to mention the continued existence of the requirement that compels publication of a supposition that the requirement has been dropped? This isn't adding up. Please take a look at the list accomplishments under strategic goal 3 in this October 2006 Army publication: http://www.army.mil/ciog6/news/500Day2006Update.pdf : "Require Trusted Platform Module (TPM) 1.2 for new computer buys" Thanks. Jvsullivan 19:35, 21 October 2006 (UTC)
The article claims that Apple uses the TPM chip for the Intel version of Mac OS X. This information seems to be false. See [1]
From an inexperienced person perspective (hence why I'm reading a Wikipedia article on the subject!) there is a missing bit of information in the Endorsement Key section on how the signing of a random number proves the identity and validity of the TPM involved. I presume it is because the manufacturer or other trusted third party holds a copy of the public key and this is retrieved by the inquirer for the purpose of communication? If this or otherwise is the case I think it would be worthy of noting. Thanks. George Jenkins 21:22, 4 November 2006 (UTC)
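The challenge-response flow the question describes can be sketched with a toy signature scheme. Everything here is illustrative: the key is absurdly small, and in practice the verifier obtains the genuine public key from a certificate chain rooted at the TPM manufacturer rather than by contacting anyone directly. (Also note, per a later comment on this page, that the real EK is a decryption key rather than a signing key; a signing key stands in here to match the article's wording.)

```python
import hashlib
import secrets

# Toy RSA endorsement key (absurdly small; real EKs are 2048-bit).
p, q = 1009, 1013
n, phi = p * q, (p - 1) * (q - 1)
e = 17                  # public exponent, published via the manufacturer's certificate
d = pow(e, -1, phi)     # private exponent, never leaves the TPM

def tpm_sign(nonce: bytes) -> int:
    """TPM side: sign the verifier's random challenge with the private key."""
    m = int.from_bytes(hashlib.sha256(nonce).digest(), "big") % n
    return pow(m, d, n)

def verifier_check(nonce: bytes, sig: int) -> bool:
    """Verifier side: using only the public key (n, e) from the
    manufacturer's certificate, confirm the signature matches the nonce."""
    m = int.from_bytes(hashlib.sha256(nonce).digest(), "big") % n
    return pow(sig, e, n) == m

nonce = secrets.token_bytes(16)   # a fresh random number defeats replay
assert verifier_check(nonce, tpm_sign(nonce))
```

So the questioner's presumption is essentially right: validity rests on the verifier holding a trusted copy of the public key, which a manufacturer (or other third party) vouches for.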
Does anyone know how this is different from the Pentium III processor serial numbers (PSNs)? I seem to remember that they didn't catch on.
I have a question for any experts on the subject: on the internet there is an abundance of sources (all of them several years old, AFAIK) speaking of the Lucky Green patent incident. For underinformed laymen like me: the story goes something like he filed a patent on using T.C. for commercial purposes right after a conference where a Microsoft spokesperson talked about it, denying any commercial intent of Microsoft on the grounds of "we didn't even know it could be used for that". Sorry for any inaccuracies... I just wondered why there is no mention of that incident anywhere? Mostly I'd like to know how things finally turned out, because there don't seem to be any up-to-date sources. Kncyu38 14:47, 30 December 2006 (UTC)
I suggest deleting the whole section titled, "Alan Turing's Universal Computer and the Free Software Foundation". Someone added a paragraph that helps clarify it, but it is still contradictory and confusing. The only worthwhile thing in the section is mentioning the relation to DRM, but even that is better explained elsewhere in the article. Roger 00:13, 8 January 2007 (UTC)
Alan Cox concentrated on the issues of who owns the platform and who owns the key, neatly using Xbox as an example. If you own the keys, then you have the ability to do what you like with the systems you've bought. Your changing the software would clearly have an impact on the trustworthiness of the keys, and people who had established a trust relationship prior to the change would quite possibly then not trust you. So you just go back to them and establish a new relationship, cool, and Alan's happy with that.
But if you don't own sufficient keys to change the system, and somebody else has the rights to say what you can and cannot do with the system, then the system is, in Cox's view, inherently insecure. Which is the case with Xbox. Cox also points out that where you don't own the keys, then "a third party can say you trust your cable provider" (we suspect Cox's cable provider may be something of an issue for him). More seriously, keys could be interfered with in the name of national security, and even the possibility of this happening clearly destroys trust. —The preceding unsigned comment was added by 74.112.116.90 ( talk) 18:33, 13 January 2007 (UTC).
There's an amusing video http://www.lafkon.net/tc/ mirrored http://www.cs.bham.ac.uk/~mdr/teaching/modules06/security/video/trustedComputing.html and on youTube http://www.youtube.com/watch?v=K1H7omJW4TI -- Bah23 13:46, 6 February 2007 (UTC)
"Trusted computing... ¡cuando la confianza supera los límites!" ("Trusted computing... when trust exceeds its limits!") (linked) is not in English. What's WP policy on this? I suspect the link should appear in the TC article written in that language? -- Bah23 16:40, 8 February 2007 (UTC)
I removed this:
This isn't correct. If you have TC features on your computer, then those features can be used to assure that you can trust your computer. Also, there is not necessarily any ability for others to be able to disable your computer. Roger 18:29, 4 March 2007 (UTC)
This article seems implicitly biased in support. In particular, it makes no mention of the considerable controversy surrounding the issue in the introductory paragraphs, deferring that until after masses of technical detail have been unloaded. (It's overly long in any case.) Also, it very much plays along with the rhetoric of the proponents (overuse of truisms, essentially: trusted computing is computers that are trustworthy and that have trustworthy components). The statements about spam have been widely discredited and certainly shouldn't appear unopposed as they do. It's an unenlightening greywash. I'm not well informed enough to redress the balance, but I feel expert attention is needed. 172.203.199.18 17:49, 10 August 2007 (UTC)
"The Linux kernel has included trusted computing support since version 2.6.13, and there are several projects to implement trusted computing for Linux."
Link, anyone? Or a reference?
VmZH88AZQnCjhT40 ( talk) 05:00, 8 January 2011 (UTC)
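For a starting point: the in-kernel support referred to is the TPM character-device driver under drivers/char/tpm, gated by Kconfig options along these lines (the core option name is from mainline Kconfig; the vendor-driver names and availability vary by chip and kernel version, so treat them as examples):

```
CONFIG_TCG_TPM=y        # core TPM driver, exposes /dev/tpm0
CONFIG_TCG_ATMEL=y      # one vendor driver; others exist (NSC, Infineon, TIS)
```

Userland projects layered on top of this device, such as the TrouSerS TSS stack, would be candidates for the requested references.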
It should be mentioned that 2048-bit encryption requires significant processing power, which in turn means increased energy requirements. In the case of protected high-resolution videos this will mean a LOT of energy.
It's interesting to compare this with the current efforts to save the environment by not wasting energy. Nokia phones will warn you to unplug the charger when not in use to eliminate its small quiescent current draw, while your computer will happily waste double the normally necessary power to play back a film. —Preceding unsigned comment added by 85.181.100.68 ( talk) 15:16, 31 October 2007 (UTC)
Leotohill ( talk) 01:31, 27 May 2009 (UTC)
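The cost claim above can be put in rough perspective: the expensive 2048-bit operation is the asymmetric primitive, which typically runs once per session (key exchange), while bulk video data is encrypted with a much faster symmetric cipher. A quick measurement of one 2048-bit modular exponentiation (the modulus below is an arbitrary illustrative number, not a real key):

```python
import time

# Time one 2048-bit modular exponentiation, the core RSA primitive.
# The modulus is just an arbitrary 2048-bit odd number for illustration.
n = (1 << 2048) - 159
base, exp = 0xDEADBEEF, (1 << 2047) + 1

t0 = time.perf_counter()
result = pow(base, exp, n)
elapsed_ms = (time.perf_counter() - t0) * 1000
print(f"one 2048-bit modexp: {elapsed_ms:.2f} ms")
```

Whether per-frame symmetric decryption measurably raises playback power draw is a separate question that the section would need a source for.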
Does the header "Possible uses of TC" mean that they haven't been implemented yet?
I just want to make sure that DRM isn't a function of TC yet. ( ZtObOr 03:45, 11 November 2007 (UTC))
(Sorry, forgot to add signature.)
More than a year ago I posted to the Cell_(microprocessor) talk page that the Cell processor had hardware DRM / Trusted Computing support and asked for some sort of coverage in the main article, and all that has come of it is a Talk Page accusation that this is "fantasia talk" and someone else all-too-typically sweeping it under a generic "security" rug and of course dismissing all Trusted Computing / DRM issues. However the fact of explicit DRM in the hardware is documented in the very title of IBM's own PDF publication on the Cell: "Cell Broadband Engine Support for Privacy Security and Digital Rights Management", and Trust design is explicitly covered on the very first page. (IBM took down the PDF at the original link I had posted, but the Google link I just gave finds the paper at multiple other locations.) I have read some other papers from IBM itself documenting crypto keys and crypto mechanisms embedded in the Cell chip, but I have been having a very difficult time locating adequate coverage and explanation of it. I have only a piecemeal understanding of the ultimate DRM/TrustedComputing implications of the design, and I do not feel confident writing it up in the main Cell article. Is there maybe anyone over here who happens to be familiar with these aspects of the Cell design who could add some mention of this issue to the Cell article? I hesitate to do a half-assed writeup myself, and I don't relish the prospect of digging around for enough technical documentation trying to develop personal expertise on the Cell design. I already spent all too many hours studying the entire low-level technical design of the TPM chip, chuckle. Alsee 19:33, 4 December 2007 (UTC)
I have a question: how many links should be in the "External links" section? How many references can an article have? Is there a Wikipedia guideline about this? Dbiagioli ( talk) 20:57, 18 December 2007 (UTC)
The article includes this sentence: "Contrast Trusted Computing with secure computing in which anonymity, not disclosure, is the main concern." Clicking on the link to secure computing takes you to an article about computer security. However, anonymity is NOT the primary concern of secure computing as described there, and if "secure computing" is in fact a different concept focused on anonymity, then I haven't found anything about it online.
Could someone please either explain or correct this? James mcl ( talk) 18:27, 31 January 2008 (UTC)
I do not know much about trusted computing, so please bear with me. I would like to understand what (if anything) trusted computing is about aside from its controversial role in preventing unauthorized copying of music and programs. After reading the article, I still do not think I understand the whole story.
I have only seen three use cases for trusted computing that seem at all interesting: large-scale distributed computations ("grid computing") where the owner of the client machines should not necessarily be trusted, like SETI@home; DRM; and games. (I think the others described in this article require a counter-productive assumption that separate computers ought to be distinguishable. In the 'Protecting hard-drive data' example, what if the motherboard breaks so I must replace it and I need access to my data? In the 'Identity theft protection' example, what if I have to switch from one computer to another in the middle of a transaction, or what if the bank changes servers to deal with an increased number of customers, or relocation? But I digress.) The first of these use cases (grid computing) is not emphasized at all in this article, but it sounds like just the sort of thing that would get a researcher excited about ideas like trusted systems.
Did the model of trusted computing described in this article come from academic work, and what motivated it? What was the early history of its implementation like? What originally motivated manufacturers to include trusted computing chips, before this became such a big political issue in connection with copyright and trade secrets? Or was copyright the original motivation for the work?
I think it is the Endorsement key that people have a problem with and connect with the idea of trusted computing. After all, a trusted I/O path, memory protection, encrypted storage, and attestation of various facts by trusted parties are not new ideas and are often made possible through the (implicitly trusted) operating system. Why should we consider hardware more trustworthy than software? But the endorsement key means that a user does not have full specifications of all the relevant details of his computer and he has sacrificed some control for other benefits. Thus all the talk (in the "Possible applications" section) of the benefits of a trusted path, verification of identity, and digital signatures did not seem to be very convincing or relevant. Am I missing something?
Projects like SETI@home face a real problem in the possibility of a malicious user ruining the whole project with fake data. It is really exciting if we have found a way to surmount that problem. Does trusted computing provide this? If so, how? These are the kind of questions I wished this article had answered!
Thanks for your work so far in maintaining this article on a contentious issue. 209.252.104.131 ( talk) 01:01, 26 April 2008 (UTC)
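On the SETI@home question above: the mechanism trusted computing offers is remote attestation, in which the client's TPM reports a digest of the software it measured at boot together with a server-supplied nonce. A compact sketch (HMAC with a shared key stands in for the TPM's asymmetric AIK signature, and all names are invented for illustration):

```python
import hashlib
import hmac
import secrets

AIK = secrets.token_bytes(32)  # stand-in for the attestation key; in real
                               # trusted computing the server would hold
                               # only its certified public half

def tpm_quote(pcr_digest: bytes, nonce: bytes) -> bytes:
    """Client TPM: attest to the measured software state plus the nonce."""
    return hmac.new(AIK, pcr_digest + nonce, hashlib.sha256).digest()

# Server side: only results accompanied by a quote over the *approved*
# client measurement and a fresh nonce are accepted.
approved = hashlib.sha256(b"official-grid-client-v1").digest()
nonce = secrets.token_bytes(16)

quote = tpm_quote(approved, nonce)               # honest client
assert hmac.compare_digest(quote, tpm_quote(approved, nonce))

tampered = hashlib.sha256(b"patched-client").digest()
assert tpm_quote(tampered, nonce) != quote       # modified software detected
```

The usual caveat applies: attestation shows which software booted, not that the software is free of bugs that could still produce bad data.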
"^ Tony McFadden (March 26, 2006). TPM Matrix. Retrieved on 2006-05-05." http://www.tonymcfadden.net/tpmvendors.htm The page no longer exists. I hope someone can find an equivalent reference. —Preceding unsigned comment added by 81.202.73.10 ( talk) 15:26, 25 May 2008 (UTC)
"every Trusted Platform Module (TPM) is required to sign a random number"
This is NOT true. The endorsement key cannot be used for signing. —Preceding unsigned comment added by 147.188.192.41 ( talk) 13:43, 30 May 2008 (UTC)
Someone removed the text on how opponents/critics of Trusted Computing call it Treacherous Computing. Here are numerous sources: [5], and [6] many of which are reliable, backing up the use of this term in this context. It belongs on this page. Cazort ( talk) 15:05, 3 March 2009 (UTC)
I think it's not constructive to keep deleting and re-adding this text. The text needs to be sourced. In the absence of sources I think it's better to omit it than include it. However, I think some of the existing sources already included on the page actually back some of it up. We need to go through these sources and read them, and find out what of this paragraph can be backed up, by what sources...and if anything cannot (and I supect all of it can be backed up probably by existing sources) then it should be deleted. Cazort ( talk) 20:38, 9 March 2009 (UTC)
Protection of biometric authentication data
This argument is a straw man: from a security standpoint, biometric data is always public. It would be completely infeasible to prevent anyone from taking a picture of your face or finger and forming a biometric model of you.
Scientus ( talk) 19:05, 2 May 2009 (UTC)
The Digital Rights Management material is almost exactly repeated under Possible applications as well as under Criticisms. I'm not sure which should be removed/changed, but I think we don't need the same information repeated twice. Luminite2 ( talk) 19:35, 5 May 2009 (UTC)
This section has some problematic claims:
1) "However, Microsoft has denied that any such functionality (virus protection) will be present in its NGSCB architecture." This needs a reference. Even then, MS's failure to include virus protection in NGSCB may have little bearing on whether TC can indeed be used for this purpose. Without further information, it's hard to know.
2) "In practice any operating system which aims to be backwards compatible with existing software will not be any safer from malicious code." [1] I can't find anything in Anderson's document that supports this. (Maybe I missed it?) I also find it dubious: closing ANY attack vector makes a system safer. It may not be "safe enough", but it is safer. If TC can be used to close any attack vector (can it?) then it can be used to make a system safer.
If we remove these claims, there's not much left for this section. I'd then be inclined to delete the whole thing unless we can find some more material for it. I'd like to see a description of how TC can be used to prevent malware infections.
Leotohill ( talk) 01:52, 27 May 2009 (UTC)
This section contains speculative claims. The sensitivity to machine configuration depends upon the characteristics of the software that is implementing restrictions. TPM provides multiple registers that record different aspects of the configuration. A process that uses the TPM to provide security can choose which of these it cares to be sensitive to. Since there is no current DRM scheme that uses TPM, any claims about its sensitivity to machine configuration, beyond the TPM root key, are speculative. Leotohill ( talk) 00:16, 30 May 2009 (UTC)
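The register behaviour described in the comment above follows directly from the TPM 1.2 design: a PCR is never written, only "extended", and a process choosing which PCRs to seal against chooses which configuration changes it is sensitive to. A minimal sketch (SHA-1 per the 1.2 spec; the boot-component names are invented):

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM 1.2 extend operation: new PCR value = SHA-1(old || measurement)."""
    return hashlib.sha1(pcr + measurement).digest()

pcr0 = b"\x00" * 20                      # PCRs reset to zeros at power-on
for component in (b"bios", b"bootloader", b"kernel"):
    pcr0 = pcr_extend(pcr0, hashlib.sha1(component).digest())

# The final value depends on every measurement and on their order, so a
# policy bound to this PCR breaks if any measured component changes --
# but software selects which PCRs (hence which changes) it cares about.
assert len(pcr0) == 20
```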
Does anyone care to refute that the current TCG TPM architecture is incapable of being used to enforce machine configuration? As I see it, to implement this on a typical commercial operating system, you would need a chain of trust through the entire privileged access and execution chain until the configuration is verified, which is impossible prima facie given the heterogeneity of systems in use. I think the article would benefit from the addition of material supporting that the current state of technology, as it interacts with x86 and modern operating systems, cannot support DRM enforcement with the aim of preventing a theoretical digital capture of protected content. VmZH88AZQnCjhT40 ( talk) 02:47, 11 February 2011 (UTC)
I don't know where these "key concepts" came from, but some are incorrect.
1) the text stated that Secure I/O is a key concept, but it isn't mentioned in the spec. I've removed it from the article.
2) Nor does the TPM spec mention curtained memory, nor do any of the other references listed below. A web search will find a number of documents that refer to curtained memory and the TPM, but I haven't found any that provide a definitive reference. These may have come from early design proposals for a TPM that didn't make the final version of the spec.
The TPM spec does describe a "shielded location", which "is a place (memory, register, etc.) where data is protected against interference and exposure, independent of its form." However, unlike Intel's Trusted Execution Technology and other general discussion of memory curtaining, a shielded location is not owned by a process, and is not directly accessible by the CPU. In the TPM, the content of a shielded location is only available via the purpose-specific commands that the TPM provides.
The TPM overview document also describes "protected storage" which I read as a special case of "shielded location". Again, it is not memory that is accessible by the CPU.
So I see the term "curtained memory" as incorrect here, and I'm inclined to edit out the references to it, unless someone can provide a better answer.
These are the references I searched for "curtain" with no hits:
http://www.trustedcomputinggroup.org/files/resource_files/AC652DE1-1D09-3519-ADA026A0C05CFAC2/TCG_1_4_Architecture_Overview.pdf http://www.trustedcomputinggroup.org/resources/tpm_specification_version_12_revision_103_part_1__3 http://www.amazon.com/Practical-Guide-Trusted-Computing/dp/0132398427/ref=pd_bxgy_b_text_b# http://www.amazon.com/gp/product/0750679603/ref=s9_cart_gw_tr02?pf_rd_m=ATVPDKIKX0DER&pf_rd_s=center-4&pf_rd_r=0F6ZXMXAM6BSTS9QEV0P&pf_rd_t=101&pf_rd_p=470939031&pf_rd_i=507846# http://www.amazon.com/Trusted-Computing-Platforms-Design-Applications/dp/0387239162/ref=pd_bxgy_b_img_c#reader
I plan to rewrite this section as soon as I feel comfortable with the material - but I encourage others to go ahead.
Leotohill ( talk) 15:28, 19 June 2009 (UTC)
and further revised. Leotohill ( talk) 02:12, 28 June 2009 (UTC)
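The "shielded location" definition quoted above can be modelled as an interface: contents reachable only through purpose-specific commands, never read directly. A rough sketch (class and command names are invented, and Python cannot truly shield memory, so this models only the interface, not the hardware property):

```python
import hashlib
import hmac

class ShieldedLocation:
    """Holds data that no command returns directly; only purpose-specific
    operations may use it (loosely modelling the TPM spec's definition)."""

    def __init__(self, secret: bytes):
        self.__secret = secret           # not CPU-addressable in a real TPM

    def authenticate(self, message: bytes) -> bytes:
        """A purpose-specific command: produce a MAC using the shielded
        value without ever exposing the value itself."""
        return hmac.new(self.__secret, message, hashlib.sha256).digest()

loc = ShieldedLocation(b"root key material")
tag = loc.authenticate(b"challenge")
assert len(tag) == 32                    # usable output; secret not revealed
```

The contrast with curtained memory is then clear: curtained memory is CPU-addressable memory fenced off from other processes, whereas a shielded location has no address at all from the CPU's point of view.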
The start of the article defines Trusted Computing as what the Trusted Computing Group does. Given that definition, the sub-section Trust says "In some proposed encryption-decryption chips...", so surely that text must be out of scope. It is also full of citation-needed marks, so I will delete that portion.
Dids ( talk) 05:09, 27 October 2009 (UTC)
I volunteer to contribute a significant amount of new text, replacing all existing text up to the section “Known applications” and leaving the rest (including the criticism) untouched. I was recently told of the current page and was struck that it is sketchy or silent on many aspects of trusted computing. I can provide a description of Trusted Computing that has been honed over 11 years of designing and describing the technology. My standard spiel mentions what technology exists (and some of what doesn’t), the real reason why it’s called “Trusted Computing”, the way that Trusted Computing protects data (plus its security properties), the essential difference from secure computing, and what Trusted Computing insiders consider to be the core principles of the technology. I (obviously) believe that Trusted Computing is a “good thing” but every technology has aspects that can be difficult to implement, and lots of technologies can be used in undesirable ways. My spiel therefore also mentions difficulties with the technology, and describes the concerns that have been told to Trusted Computing insiders over the years, plus the status of attempts to resolve those concerns. According to Wikipedia’s help pages, someone in my situation should discuss my intentions here before editing the main page, so here I am. I’m in no way a Wikipedia expert. What’s next? Walklooker ( talk) 17:50, 20 January 2010 (UTC)
I've posted draft text at User:Walklooker/draft for 'trusted computing'. Walklooker ( talk) 11:48, 22 January 2010 (UTC)
It was unproductive to "undo" the revamp. The reinstated version confuses, contains factual errors, is sparse and patchy on actual trusted computing, and does not mention all the core properties of trusted computing or distinguish them from optional properties or specific implementations.

Further, the reinstated version has a systemic fault, in that the description of trusted computing is dominated by a description of a classical DRM system that uses trusted computing as a system component to restrict distribution of data. This engenders confusion because trusted computing does not specify that platforms can't distribute data and actually includes mechanisms to assist the distribution of data. Any DRM description involving trusted computing should be in the DRM subsection of the "Trusted Computing applications" section, or even on the Wikipedia page describing Digital Rights Management.

The trusted computing description should instead describe trusted computing, which is a tool for protecting data, designed on the principle that there is no such thing as universally preferred behavior. Trusted computing provides a form of access control that helps enable the selection of preferred behaviors, and helps enforce a selection if one is made, but doesn't dictate behavior, or even insist that a selection must be made. In other words, trusted computing does not fix behavior but does protect data by allowing the association of data with any arbitrary behavior, and includes methods to enable the determination of current behavior. This enables protection of data in a wide spectrum of tasks, such as performing one's business activities, performing personal activities, viewing one's financial information, viewing one's medical information, accessing different services or being admitted to sensitive networks, and even just casual Internet browsing. What needs to be explained is what trusted computing is, how it works, and the state of the art, and that is what the revamp did.
Here are examples of concerns with the current (reinstated) version:
• "The term is taken from the field of trusted systems". Actually trusted computing is called trusted computing because it is a technological implementation of a trust process, not because it must simply be trusted.
• "With Trusted Computing, the computer will consistently behave in expected ways, and those behaviors will be enforced by hardware and software". Actually trusted computing mechanisms enable any selection of expected behavior to be enforced but do not check or constrain the selected behavior. Hopefully a selected behavior is consistent, and the computer will consistently behave in expected ways, but trusted computing can’t change the behavior of software, or compensate for faults in software.
• "Enforcing this behavior is achieved by loading the hardware with a unique encryption key inaccessible to the rest of the system". It's hard to be sure what this means. Trusted computing uses cryptography to help enforce a selected behavior, but encryption keys need not be unique and might be loaded or generated locally on the trusted computing hardware.
• "The main functionality of TC is to ensure that only authorized code runs on a system". This is far too simplistic. Trusted computing has no way to constrain the software that can execute on a computer. Even when trusted computing mechanisms are used, there is nothing in trusted computing to constrain the choice of software or behavior that must be associated with data. Anyone with unrestricted access to plain-text data can associate any software with that data using trusted computing.
• "However uncooperative operating systems do can misuse security features to prevent legitimate data exchange!" This comment could apply to any type of computer. Trusted computing does not encourage this type of behavior.
• The current description describes in multiple places the properties of a classical DRM system that uses trusted computing as a system component. This should be described in the DRM example, and explained as an application of trusted computing, namely that a third party associates third party data with a behavior that restricts the distribution of data. The DRM example should also explain that that a third party cannot discover or use the trusted computing mechanisms without permission from the computer’s owner.
• "Trusted computing encompasses six key technology concepts, of which all are required for a fully Trusted system". This confuses a DRM system with trusted computing. Further, trusted computing has three core concepts, one of which is not even mentioned in the reinstated description. In contrast, "secure input and output" is not an essential part of trusted computing and "memory curtaining" is a desirable but not essential component of a trusted computer (which needs isolation mechanisms); it may be present in some implementations and absent in others.
• "every Trusted Platform Module (TPM) is required to sign a random number". The accurate statement would be that every Trusted Platform Module (TPM) is required to be able to sign. Trusted platforms never need to do this unless the computer owner decides to reveal that they have a genuine trusted computer.
• "[EK] makes it impossible for a software TPM emulator, with a self-generated Endorsement Key, to start a secure transaction with a trusted entity". Actually nothing prevents the use of a software TPM with its own EK for a secure transaction with a trusted entity, if the entity trusts the software TPM. The EK in a hardware TPM just makes it impossible for a TPM emulator to pretend to be that hardware TPM.
• "[Sealed Storage] means the data can be read only by the same combination of software and hardware". This confuses the properties of trusted computing with that of a DRM system. Sealed Storage releases data to a particular combination of software and hardware, but that software can propagate the data in any way it chooses, and trusted computing provides mechanisms to help propagate the data. Hence data protected by trusted computing could be read by other software and hardware, or just by one combination of hardware and software.
• "This will prevent people from buying a new computer, or upgrading parts of their current one except after explicit permission of the vendor of the old computer." This confuses the properties of trusted computing with alleged effects of DRM systems. In trusted computing, the only upgrades that require permission from an OEM are upgrades to the trusted computing mechanisms that enforce reliable selection of behavior.
• "Remote attestation allows changes to the user's computer to be detected by authorized parties". This is overly simplistic. Remote attestation allows changes to the user's computer to be reported by authorized parties, such as the computer owner. It’s the authorized party that authorizes reporting who approves the receipt of attestation information by remote parties.
• "The computer can then present this certificate to a remote party to show that its software has not been tampered with." This is inaccurate, because the `certificate' only shows that that unaltered software is currently executing, not whether software has been tampered with.
• "Remote attestation is usually combined with public-key encryption so that the information sent can only be read by the programs that presented and requested the attestation, and not by an eavesdropper, such as the computer owner." This is at minimum misleading and arguably incorrect. There is no mechanism in trusted computing to prevent the owner obtaining an attestation value. In fact attestation cannot operate without permission from the computer owner. Encrypted attestation serves only to protect the owner's privacy from eavesdroppers. If the owner does not want privacy, the attestation value need not be encrypted.
• "secure I/O prevents the user from recording it as it is transmitted to the audio subsystem". This confuses the properties of trusted computing with that of a DRM system. There is no mechanism in trusted computing to prevent the user from recording an output.
• "remote attestation protects it from unauthorized software even when it is used on other computers". It's hard to be sure what this means. It might mean that remote attestation can be used to ensure that a trusted computer will respect the data that it is accessing on a remote computer.
• "Remote Attestation use, however, has been discouraged in favour of Direct anonymous attestation". It should be made clear that DAA is a substitute for a Trusted Third Party, not a substitute for remote attestation. An additional complication is that DAA can be used with a Trusted Third Party.
• The current description states in multiple places that a CA generates an AIK. This is incorrect. In fact the TPM generates AIKs and a CA provides certificates for AIKs.
• "These three credentials will in short be referred to as "EK". The EK is a platform specific key that uniquely identifies the platform." These statements are contradictory and unhelpful because, in trusted computing, the EK is a key and is not a certificate or three certificates.
• "The EKpub will uniquely identify the endorser of the platform, model, what kind of software is currently being used on the platform, details of the TPM, and that the platform (PC) complies with the TCG specifications". This is confusing because various certificates, which reference the EKpub, describe the platform properties. Also, these certificates do not identify what kind of software is currently being used on the platform.
• "Allegedly, this will provide the user with anonymity". In fact it will provide pseudonymity unless each AIK is used only once.
• "If the Verifier accepts the DAA supplied it will produce an AIK" Again, it’s the TPM that produces AIKs.
• "If the anonymity of the user as a whole will be increased by the new version is another question". There's mathematical evidence that DAA anonymizes the TPM. All that trusted computing can try to do is not make anonymity any worse than it already is (because of other factors).
• "One can easily question the objectives of the Issuer, since this most commonly will be the platform manufacturer." The issuance of an EK and EK credential (or a DAA credential) by an OEM is a value-added service provided by the OEM to their customers. Without such credentials, it’s difficult for a private customer or small business or organisation to convince others that they have a genuine trusted platform. (A famous company or organisation could produce its own credentials and rely upon its brand to convince others that they have a genuine trusted platform.) There's mathematical evidence that an Issuer can't use DAA to obtain or deduce any linkage between a specific trusted computer and the usage of that computer.
• "Another key question is what kind of information will be supplied to the Issuer in order to obtain the DAA credentials". The answer to this question is that the DAA protocol uses the pubEK (and, by implication, the EK certificate) to create DAA credentials. Walklooker ( talk) 10:32, 4 April 2010 (UTC)
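The sealed-storage point in the first bullet above can be illustrated with a toy sketch. All names here are illustrative stand-ins, and the XOR "encryption" is a deliberate simplification, not the TPM API: the point is only that data sealed to a measured platform state is released when that state is reproduced, and that the software which receives it is then free to propagate it.

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # TPM-style extend: new PCR = hash(old PCR || hash(measurement))
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

def seal(data: bytes, pcr: bytes) -> dict:
    # Toy "seal": bind data to the current PCR value. A real TPM protects
    # the key inside the chip; this toy just derives an XOR keystream.
    key = hashlib.sha256(b"storage-root" + pcr).digest()
    stream = key * (len(data) // len(key) + 1)
    return {"expected_pcr": pcr,
            "blob": bytes(a ^ b for a, b in zip(data, stream))}

def unseal(sealed: dict, pcr: bytes) -> bytes:
    if pcr != sealed["expected_pcr"]:
        raise PermissionError("platform state does not match sealed state")
    key = hashlib.sha256(b"storage-root" + pcr).digest()
    stream = key * (len(sealed["blob"]) // len(key) + 1)
    return bytes(a ^ b for a, b in zip(sealed["blob"], stream))

# Platform boots and measures its software stack into a PCR.
pcr = extend(b"\x00" * 32, b"trusted-os-v1")
secret = seal(b"my data", pcr)

# Same software state: data is released, and the releasing software may
# propagate it freely (sealing is access control, not copy control).
assert unseal(secret, pcr) == b"my data"

# Different software state: release is refused.
try:
    unseal(secret, extend(b"\x00" * 32, b"other-os"))
except PermissionError:
    pass
```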
Thank you for making constructive comments. I actually did try simple editing (on a local copy of the Wikipedia page) and suggested the revamp because I ended up with more new text than original text. If we start again, the structural changes would be removal of the confusion with a DRM system (perhaps putting that information into the `DRM application' section), followed by corrections and new information. So there will still eventually be large alterations. Would you be prepared to help with this? Walklooker ( talk) 09:23, 20 April 2010 (UTC)
I bought a new computer in 2012 that did not come with this, so the article is wrong. -- 209.188.32.201 ( talk) 17:17, 12 July 2013 (UTC)
The 'Trusted third party' section in this article has had a 'needs cleanup' tag on it since May 2010 because it has no citations and is not written in an encyclopedic tone. I'm replacing this entire section with a link to Trusted third party, because that article appears to cover the topic much more thoroughly and more properly. It would be nice for the 'Trusted third party' section here to have a brief summary of the topic in addition to the link, but I wasn't able to learn enough from that other article to summarize it in a useful way. - Brian Kendig ( talk) 17:26, 7 January 2024 (UTC)
A substantial portion of this article is devoted to speculation about potential applications of computer security.
Speculation about potential application for new technologies isn't knowledge. 99 44/100ths% of it is as ephemeral as a soap bubble and yet less captivating and entertaining.
It's enough to say that the cost of being unable to prevent altered and malicious software from interfering with the intended function of everything from industrial machines to music players is sufficient to inspire innovators to apply every available tool, including trusted computing initiatives.
A final reason not to put speculative applications here is that experience has shown that security concepts simple enough to explain in a few sentences have proven to be easily defeated. Application proposals sufficiently complex to be worth knowing about will also be long enough to merit a Wikipedia article of their own.
As an editorial comment on the pro/con optimist/defeatist saga here; computing technologies have been proposing and speculatively promising more than could be delivered in a generation since the 1950s. Like the bubbles on top of a glass of champagne, without them it would be much less, but they don't really add anything. TPM is the most recent in a long line of timely initiatives to increase the difficulty of defeating computer security. It's good and it's helpful, but without being able to prevent physical access to the platform it remains the same problem as communicating over an open channel. The time and cost to intercept and alter can be increased, but never insurmountably. PolychromePlatypus ( talk) 22:15, 5 May 2024 (UTC)
I hope I'm not screwing up someone's legit system, but I'm going through and removing a lot of non-grammatical spaces (i.e., a space before a punctuation mark, or double/triple spaces between words). Sorry if this is any problem -- Gbinal 01:44, 15 August 2006 (UTC)
The section on "Proposed owner override for TC" seems to be just another disputed issue. Some people think that it is a good idea, and some people don't. I suggest putting it with the Disputed Issues. Also, it is written with a very anti-TC POV. It complains that the TC folks have refused to scrap a feature in order to please the anti-TC folks. Yes, that's right, just like they disagree on other issues. It should just describe the dispute. Roger 01:29, 23 September 2006 (UTC)
Add a section on the partisan objections of RMS, the GNU project, and the Free Software Foundation, who claim this undermines Alan Turing's universal computer concept (that is, that a computer is a machine that can perform the same function as any other existing machine: printing press, fax, polygraph, cassette tapes, records, radio, television, etc.) and on how trusted computing can possibly limit computers' ability to do these things.
Thanks, -- Mofomojo 06:10, 27 September 2006 (UTC)
"The U.S. army has also stated that every new PC bought by the army must support trusted computing [3]" - the referenced article DOES NOT state this.
You write at footnote 4 that "the link [no longer states] that pc must have a TPM." That doesn't mean that the Army dropped the requirement, does it? 10/21/2006 Jvsullivan 19:26, 6 November 2006 (UTC)
I don't mean to be trouble to you, but you're making the affirmative assertion that the requirement was dropped. To base that affirmative assertion on the absence of evidence that it wasn't dropped doesn't seem very encyclopedic. If I come across a reiteration of the requirement, I'll certainly point it out. But I think you should reconsider characterizing the absence of a reiteration as a reversal. Thanks for your attention. Jvsullivan 17:32, 21 October 2006 (UTC)
"We have to suppose"? The cited source says nothing to support the proposition that the requirement was dropped. What is it about an FAQ page that happens not to mention the continued existence of the requirement that compels publication of a supposition that the requirement has been dropped? This isn't adding up. Please take a look at the list accomplishments under strategic goal 3 in this October 2006 Army publication: http://www.army.mil/ciog6/news/500Day2006Update.pdf : "Require Trusted Platform Module (TPM) 1.2 for new computer buys" Thanks. Jvsullivan 19:35, 21 October 2006 (UTC)
The article claims that Apple uses the TPM chip for the Intel version of Mac OS X. This information seems to be false. See [1]
From an inexperienced person perspective (hence why I'm reading a Wikipedia article on the subject!) there is a missing bit of information in the Endorsement Key section on how the signing of a random number proves the identity and validity of the TPM involved. I presume it is because the manufacturer or other trusted third party holds a copy of the public key and this is retrieved by the inquirer for the purpose of communication? If this or otherwise is the case I think it would be worthy of noting. Thanks. George Jenkins 21:22, 4 November 2006 (UTC)
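The challenge-response pattern the question above describes can be sketched with textbook RSA. The toy numbers below are purely illustrative (real TPM keys are 2048-bit RSA with proper padding), and note that in TPM 1.2 the key that signs such challenges is an attestation identity key certified for signing, not the endorsement key itself. The verifier obtains the chip's public key via a certificate from the manufacturer or another trusted third party, sends a fresh random nonce, and checks the signature; the nonce prevents replay of an old response.

```python
import hashlib

# Toy textbook-RSA key pair standing in for a TPM signing key and the
# public key published in its certificate (illustrative values only).
p, q = 61, 53
n = p * q          # public modulus (in the certificate)
e = 17             # public exponent (in the certificate)
d = 2753           # private exponent (held inside the chip)

def tpm_sign(nonce: bytes) -> int:
    # Only the chip, holding d, can compute this.
    m = int.from_bytes(hashlib.sha256(nonce).digest(), "big") % n
    return pow(m, d, n)

def verifier_check(nonce: bytes, sig: int) -> bool:
    # Anyone with the certified public key (e, n) can check.
    m = int.from_bytes(hashlib.sha256(nonce).digest(), "big") % n
    return pow(sig, e, n) == m

challenge = b"fresh-random-nonce-42"   # random per session: prevents replay
sig = tpm_sign(challenge)
assert verifier_check(challenge, sig)
assert not verifier_check(challenge, (sig + 1) % n)  # tampered signature fails
```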
Does anyone know how this is different from the Pentium III PSNs (processor serial numbers)? I seem to remember that they didn't catch on.
I got a question for any experts on the subject: On the internet, there is an abundance of sources (all of them several years old, AFAIK) speaking of the Lucky Green patent incident. For underinformed laymen like me: The story goes something like he filed a patent on using T.C. for commercial purposes right after a conference where some Microsoft spokesperson talked about it, negating a commercial intent of Microsoft on the grounds of "we didn't even know it could be used for that". Sorry for any inaccuracies... I just wondered why there is no mention of that incident anywhere? Mostly I'd like to know how things finally turned out, because there don't seem to be any up to date sources. Kncyu38 14:47, 30 December 2006 (UTC)
I suggest deleting the whole section titled, "Alan Turing's Universal Computer and the Free Software Foundation". Someone added a paragraph that helps clarify it, but it is still contradictory and confusing. The only worthwhile thing in the section is mentioning the relation to DRM, but even that is better explained elsewhere in the article. Roger 00:13, 8 January 2007 (UTC)
Alan Cox concentrated on the issues of who owns the platform and who owns the key, neatly using Xbox as an example. If you own the keys, then you have the ability to do what you like with the systems you've bought. Your changing the software would clearly have an impact on the trustworthiness of the keys, and people who had established a trust relationship prior to the change would quite possibly then not trust you. So you just go back to them and establish a new relationship, cool, and Alan's happy with that.
But if you don't own sufficient keys to change the system, and somebody else has the rights to say what you can and cannot do with the system, then the system is, in Cox's view, inherently insecure. Which is the case with Xbox. Cox also points out that where you don't own the keys, then "a third party can say you trust your cable provider" (we suspect Cox's cable provider may be something of an issue for him). More seriously, keys could be interfered with in the name of national security, and even the possibility of this happening clearly destroys trust. —The preceding unsigned comment was added by 74.112.116.90 ( talk) 18:33, 13 January 2007 (UTC).
There's an amusing video http://www.lafkon.net/tc/ mirrored http://www.cs.bham.ac.uk/~mdr/teaching/modules06/security/video/trustedComputing.html and on youTube http://www.youtube.com/watch?v=K1H7omJW4TI -- Bah23 13:46, 6 February 2007 (UTC)
"Trusted computing... ¡cuando la confianza supera los límites!" ("when trust exceeds the limits!") (linked) is not in English. What's WP policy on this? I suspect the link should appear in the TC article written in that language?-- Bah23 16:40, 8 February 2007 (UTC)
I removed this:
This isn't correct. If you have TC features on your computer, then those features can be used to assure that you can trust your computer. Also, there is not necessarily any ability for others to be able to disable your computer. Roger 18:29, 4 March 2007 (UTC)
This article seems implicitly biased in support. In particular it makes no mention of the considerable controversy surrounding the issue in the introductory paragraphs, deferring that until after masses of technical details have been unloaded. (It's overly long in any case.) Also it seems to very much play along with the rhetoric of the proponents (overuse of truisms essentially like: trusted computing is computers that are trustworthy and that have trustworthy components). The statements about spam have been widely discredited and certainly shouldn't appear unopposed as they do. It's an unenlightening greywash. I'm not well informed enough to redress the balance but I feel expert attention is needed. 172.203.199.18 17:49, 10 August 2007 (UTC)
"The Linux kernel has included trusted computing support since version 2.6.13, and there are several projects to implement trusted computing for Linux."
Link anyone? or reference ?
VmZH88AZQnCjhT40 ( talk) 05:00, 8 January 2011 (UTC)
It should be mentioned that the 2048-bit strong encryption requires significant processing power, which in turn means increased energy requirements. In the case of protected high-resolution videos this will mean a LOT of energy.
It's interesting to compare this with the current efforts to save the environment by not wasting energy. Nokia phones will warn you to unplug the charger when not used to eliminate its small quiescent current draw, while your computer will happily waste the double of the normally necessary power to play back a film. —Preceding unsigned comment added by 85.181.100.68 ( talk) 15:16, 31 October 2007 (UTC)
Leotohill ( talk) 01:31, 27 May 2009 (UTC)
Does the header "Possible uses of TC" mean that they haven't been implemented yet?
I just want to make sure that DRM isn't a function of TC yet. ( ZtObOr 03:45, 11 November 2007 (UTC))
(Sorry, forgot to add signature.)
More than a year ago I posted to the Cell_(microprocessor) talk page that the Cell processor had hardware DRM / Trusted Computing support and asking for some sort of coverage in the main article, and all that has come of it is a Talk Page accusation that this is "fantasia talk" and someone else all-too-typically sweeping it under a generic "security" rug and of course dismissing all Trusted Computing / DRM issues. However the fact of explicit DRM in the hardware is documented in the very title of IBM's own PDF publication on the Cell: "Cell Broadband Engine Support for Privacy Security and Digital Rights Management", and Trust design explicitly covered on the very first page. (IBM took down the PDF at the original link I had posted, but the Google link I just gave finds the paper at multiple other locations). I have read some other papers from IBM itself documenting crypto keys and crypto mechanisms embedded in the Cell chip, however I have been having a very difficult time locating adequate coverage and explanation on it. I have only a piecemeal understanding of the ultimate DRM/TrustedComputing implications of the design, and I do not feel confident writing it up in the main Cell article. Is there maybe anyone over here who happens to be familiar with these aspects of the Cell design who could add some mention of this issue to the Cell article? I hesitate to do a half-assed writeup myself, and I don't relish the prospect of digging around for enough technical documentation trying to develop personal expertise on the Cell design. I already spend all too many hours studying the entire low level technical design of the TPM chip, chuckle. Alsee 19:33, 4 December 2007 (UTC)
I have a question: how many links should be in the "External links" section? How many references can an article have? Is there a Wikipedia guideline about it? Dbiagioli ( talk) 20:57, 18 December 2007 (UTC)
The article includes this sentence: "Contrast Trusted Computing with secure computing in which anonymity, not disclosure, is the main concern." Clicking on the link to secure computing takes you to an article about computer security. However, anonymity is NOT the primary concern of secure computing as described there, and if "secure computing" is in fact a different concept focused on anonymity, then I haven't found anything about it online.
Could someone please either explain or correct this? James mcl ( talk) 18:27, 31 January 2008 (UTC)
I do not know much about trusted computing, so please bear with me. I would like to understand what (if anything) trusted computing is about aside from its controversial role in preventing unauthorized copying of music and programs. After reading the article, I still do not think I understand the whole story.
I have only seen three use cases for trusted computing that seem at all interesting: large-scale distributed computations ("grid computing") where the owner of the client machines should not necessarily be trusted, like SETI@home; DRM; and games. (I think the others described in this article require a counter-productive assumption that separate computers ought to be distinguishable. In the 'Protecting hard-drive data' example, what if the motherboard breaks so I must replace it and I need access to my data? In the 'Identity theft protection' example, what if I have to switch from one computer to another in the middle of a transaction, or what if the bank changes servers to deal with increased number of customers, or relocation? But I digress.) The first of these use cases (grid computing) is not emphasized at all in this article, but it sounds like just the sort of thing that would get a researcher excited about ideas like trusted systems.
Did the model of trusted computing described in this article come from academic work, and what motivated it? What was the early history of its implementation like? What originally motivated manufacturers to include trusted computing chips, before this became such a big political issue in connection with copyright and trade secrets? Or was copyright the original motivation for the work?
I think it is the Endorsement key that people have a problem with and connect with the idea of trusted computing. After all, a trusted I/O path, memory protection, encrypted storage, and attestation of various facts by trusted parties are not new ideas and are often made possible through the (implicitly trusted) operating system. Why should we consider hardware more trustworthy than software? But the endorsement key means that a user does not have full specifications of all the relevant details of his computer and he has sacrificed some control for other benefits. Thus all the talk (in the "Possible applications" section) of the benefits of a trusted path, verification of identity, and digital signatures did not seem to be very convincing or relevant. Am I missing something?
Projects like SETI@home face a real problem in the possibility of a malicious user ruining the whole project with fake data. It is really exciting if we have found a way to surmount that problem. Does trusted computing provide this? If so, how? These are the kind of questions I wished this article had answered!
Thanks for your work so far in maintaining this article on a contentious issue. 209.252.104.131 ( talk) 01:01, 26 April 2008 (UTC)
"^ Tony McFadden (March 26, 2006). TPM Matrix. Retrieved on 2006-05-05." http://www.tonymcfadden.net/tpmvendors.htm The page no longer exists. I hope someone can find an equivalent reference. —Preceding unsigned comment added by 81.202.73.10 ( talk) 15:26, 25 May 2008 (UTC)
every Trusted Platform Module (TPM) is required to sign a random number
This is NOT true. The endorsement key cannot be used for signing. —Preceding unsigned comment added by 147.188.192.41 ( talk) 13:43, 30 May 2008 (UTC)
Someone removed the text on how opponents/critics of Trusted Computing call it Treacherous Computing. Here are numerous sources: [5], and [6] many of which are reliable, backing up the use of this term in this context. It belongs on this page. Cazort ( talk) 15:05, 3 March 2009 (UTC)
I think it's not constructive to keep deleting and re-adding this text. The text needs to be sourced. In the absence of sources I think it's better to omit it than include it. However, I think some of the existing sources already included on the page actually back some of it up. We need to go through these sources and read them, and find out what of this paragraph can be backed up, by what sources...and if anything cannot (and I supect all of it can be backed up probably by existing sources) then it should be deleted. Cazort ( talk) 20:38, 9 March 2009 (UTC)
Protection of biometric authentication data
This argument is a straw man, since in a security context biometric data is always public. It would be completely infeasible to prevent anyone from taking a picture of your face or finger, and forming a biometric model of you.
Scientus (
talk) 19:05, 2 May 2009 (UTC)
The Digital Rights Management is almost exactly repeated under possible application as well as criticisms. I'm not sure which should be removed/changed, but I think that we don't need the same information repeated twice. Luminite2 ( talk) 19:35, 5 May 2009 (UTC)
This section has some problematic claims:
1) "However, Microsoft has denied that any such functionality (virus protection) will be present in its NGSCB architecture." This needs a reference. Even then, MS's failure to include virus protection in NGSCB may have little bearing on whether TC can indeed be used for this purpose. Without further information, it's hard to know.
2) "In practice any operating system which aims to be backwards compatible with existing software will not be any safer from malicious code." [1] I can't find anything in Anderson's document that supports this. (Maybe I missed it?) I also find it dubious: closing ANY attack vector makes a system safer. It may not be "safe enough", but it is safer. If TC can be used to close any attack vector (can it?) then it can be used to make a system safer.
If we remove these claims, there's not much left for this section. I'd then be inclined to delete the whole thing unless we can find some more material for it. I'd like to see a description of how TC can be used to prevent malware infections.
Leotohill ( talk) 01:52, 27 May 2009 (UTC)
This section contains speculative claims. The sensitivity to machine configuration depends upon the characteristics of the software that is implementing restrictions. TPM provides multiple registers that record different aspects of the configuration. A process that uses the TPM to provide security can choose which of these it cares to be sensitive to. Since there is no current DRM scheme that uses TPM, any claims about its sensitivity to machine configuration, beyond the TPM root key, are speculative. Leotohill ( talk) 00:16, 30 May 2009 (UTC)
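The register behavior described in the comment above can be sketched as follows. The measurement strings are hypothetical; the extend rule follows the TPM convention of one-way hash chaining (SHA-1 in the 1.2 spec, used here for fidelity). Each PCR accumulates an order-sensitive chain, and a policy can choose which PCRs it is sensitive to:

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # TPM 1.2 extend: PCR_new = SHA1(PCR_old || SHA1(measurement)).
    # One-way: software cannot reset a PCR to an arbitrary value mid-boot.
    return hashlib.sha1(pcr + hashlib.sha1(measurement).digest()).digest()

ZERO = b"\x00" * 20  # PCRs start at zero at platform reset

# Different PCRs record different aspects of the configuration
# (hypothetical component names, for illustration only):
pcrs = {0: ZERO, 4: ZERO, 14: ZERO}
pcrs[0] = extend(pcrs[0], b"BIOS v1.07")
pcrs[4] = extend(pcrs[4], b"bootloader")
pcrs[14] = extend(pcrs[14], b"media-player-v3")

# A policy chooses which registers it cares about. A policy bound only
# to PCR 14 is insensitive to a configuration change measured elsewhere:
policy = pcrs[14]
pcrs[0] = extend(pcrs[0], b"BIOS v1.08 update")
assert pcrs[14] == policy

# Order matters: the same measurements in a different order yield a
# different value, so a PCR reflects the boot sequence, not just its parts.
ab = extend(extend(ZERO, b"x"), b"y")
ba = extend(extend(ZERO, b"y"), b"x")
assert ab != ba
```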
Does anyone care to refute that the current TCG TPM architecture is incapable of being used to enforce machine configuration? As I see it, to implement this on a typical commercial operating system, you would need a chain of trust through the entire privileged access and execution chain until the configuration is verified, which is impossible prima facie given the heterogeneity of systems in use. I think the article would benefit from the addition of material supporting that the current state of technology as it interacts with x86 and modern operating systems cannot support DRM enforcement with the aim of preventing a theoretical digital capture of protected content. VmZH88AZQnCjhT40 ( talk) 02:47, 11 February 2011 (UTC)
I don't know where these "key concepts" came from, but some are incorrect.
1) the text stated that Secure I/O is a key concept, but it isn't mentioned in the spec. I've removed it from the article.
2) nor does the TPM spec mention curtained memory. Nor do any of the other references listed below. A web search will find a number of documents that refer to curtained memory and the TPM, I haven't found any that provide a definitive reference. These may have come from early design proposals for a TPM that didn't make the final version of the spec.
The TPM spec does describe "shielded location" which "is a place (memory, register, etc.) where data is protected against interference and exposure, independent of its form." However, unlike Intel's Trusted Execution Technology and other general discussion of memory curtaining, a shielded location is not owned by a process, and is not directly accessible by the CPU. In the TPM, the content of a shielded location is only available via the purpose-specific commands that the TPM provides.
The TPM overview document also describes "protected storage" which I read as a special case of "shielded location". Again, it is not memory that is accessible by the CPU.
So I see the term "curtained memory" as incorrect here, and I'm inclined to edit out the references to it, unless someone can provide a better answer.
These are the references I searched for "curtain" with no hits:
http://www.trustedcomputinggroup.org/files/resource_files/AC652DE1-1D09-3519-ADA026A0C05CFAC2/TCG_1_4_Architecture_Overview.pdf http://www.trustedcomputinggroup.org/resources/tpm_specification_version_12_revision_103_part_1__3 http://www.amazon.com/Practical-Guide-Trusted-Computing/dp/0132398427/ref=pd_bxgy_b_text_b# http://www.amazon.com/gp/product/0750679603/ref=s9_cart_gw_tr02?pf_rd_m=ATVPDKIKX0DER&pf_rd_s=center-4&pf_rd_r=0F6ZXMXAM6BSTS9QEV0P&pf_rd_t=101&pf_rd_p=470939031&pf_rd_i=507846# http://www.amazon.com/Trusted-Computing-Platforms-Design-Applications/dp/0387239162/ref=pd_bxgy_b_img_c#reader
I plan to rewrite this section as soon as I feel comfortable with the material - but I encourage others to go ahead.
Leotohill (
talk) 15:28, 19 June 2009 (UTC)
and further revised
Leotohill (
talk) 02:12, 28 June 2009 (UTC)
The start of the article defines Trusted Computing as what the Trusted Computing Group does. Given that definition, the sub-section Trust says "In some proposed encryption-decryption chips...", thus surely that text must be out of scope. It is also full of citation needed marks, so I will delete that portion.
Dids (
talk) 05:09, 27 October 2009 (UTC)
I volunteer to contribute a significant amount of new text, replacing all existing text up to the section “Known applications” and leaving the rest (including the criticism) untouched. I was recently told of the current page and was struck that it is sketchy or silent on many aspects of trusted computing. I can provide a description of Trusted Computing that has been honed over 11 years of designing and describing the technology. My standard spiel mentions what technology exists (and some of what doesn’t), the real reason why it’s called “Trusted Computing”, the way that Trusted Computing protects data (plus its security properties), the essential difference from secure computing, and what Trusted Computing insiders consider to be the core principles of the technology. I (obviously) believe that Trusted Computing is a “good thing” but every technology has aspects that can be difficult to implement, and lots of technologies can be used in undesirable ways. My spiel therefore also mentions difficulties with the technology, and describes the concerns that have been told to Trusted Computing insiders over the years, plus the status of attempts to resolve those concerns. According to Wikipedia’s help pages, someone in my situation should discuss my intentions here before editing the main page, so here I am. I’m in no way a Wikipedia expert. What’s next? Walklooker ( talk) 17:50, 20 January 2010 (UTC)
I've posted draft text at User:Walklooker/draft for `trusted computing' Walklooker ( talk) 11:48, 22 January 2010 (UTC)
It was unproductive to `undo’ the revamp. The reinstated version confuses, contains factual errors, is sparse and patchy on actual trusted computing, and does not mention all the core properties of trusted computing or distinguish them from optional properties or specific implementations. Further, the reinstated version has a systemic fault, in that the description of trusted computing is dominated by a description of a classical DRM system that uses trusted computing as a system component to restrict distribution of data. This engenders confusion because trusted computing does not specify that platforms can’t distribute data and actually includes mechanisms to assist the distribution of data. Any DRM description involving trusted computing should be in the DRM subsection of the `Trusted Computing applications’ section, or even on the Wikipedia page describing Digital Rights Management. The trusted computing description should instead describe trusted computing, which is a tool for protecting data, designed on the principle that there is no such thing as universally preferred behavior. Trusted computing provides a form of access control that helps enable the selection of preferred behaviors, and helps enforce a selection if one is made, but doesn’t dictate behavior, or even insist that a selection must be made. In other words, trusted computing does not fix behavior but does protect data by allowing the association of data with any arbitrary behavior, and includes methods to enable the determination of current behavior. This enables protection of data in a wide spectrum of tasks, such as performing one’s business activities, performing personal activities, viewing one’s financial information, viewing one’s medical information, accessing different services or being admitted to sensitive networks, and even just casual Internet browsing. What needs to be explained is what trusted computing is, how it works, and the state of the art, and that is what the revamp did.
Here are examples of concerns with the current (reinstated) version:
• "The term is taken from the field of trusted systems". Actually trusted computing is called trusted computing because it is a technological implementation of a trust process, not because it must simply be trusted.
• "With Trusted Computing, the computer will consistently behave in expected ways, and those behaviors will be enforced by hardware and software". Actually trusted computing mechanisms enable any selection of expected behavior to be enforced but do not check or constrain the selected behavior. Hopefully a selected behavior is consistent, and the computer will consistently behave in expected ways, but trusted computing can’t change the behavior of software, or compensate for faults in software.
• "Enforcing this behavior is achieved by loading the hardware with a unique encryption key inaccessible to the rest of the system". It's hard to be sure what this means. Trusted computing uses cryptography to help enforce a selected behavior, but encryption keys need not be unique and might be loaded or generated locally on the trusted computing hardware.
• "The main functionality of TC is to ensure that only authorized code runs on a system". This is far too simplistic. Trusted computing has no way to constrain the software that can execute on a computer. Even when trusted computing mechanisms are used, there is nothing in trusted computing to constrain the choice of software or behavior that must be associated with data. Anyone with unrestricted access to plain-text data can associate any software with that data using trusted computing.
• "However uncooperative operating systems do can misuse security features to prevent legitimate data exchange!" This comment could apply to any type of computer. Trusted computing does not encourage this type of behavior.
• The current description describes in multiple places the properties of a classical DRM system that uses trusted computing as a system component. This should be described in the DRM example, and explained as an application of trusted computing, namely that a third party associates third party data with a behavior that restricts the distribution of data. The DRM example should also explain that a third party cannot discover or use the trusted computing mechanisms without permission from the computer’s owner.
• "Trusted computing encompasses six key technology concepts, of which all are required for a fully Trusted system". This confuses a DRM system with trusted computing. Further, trusted computing has three core concepts, one of which is not even mentioned in the reinstated description. In contrast, "secure input and output" is not an essential part of trusted computing and "memory curtaining" is a desirable but not essential component of a trusted computer (which needs isolation mechanisms); it may be present in some implementations and absent in others.
• "every Trusted Platform Module (TPM) is required to sign a random number". The accurate statement would be that every Trusted Platform Module (TPM) is required to be able to sign. Trusted platforms never need to do this unless the computer owner decides to reveal that they have a genuine trusted computer.
• "[EK] makes it impossible for a software TPM emulator, with a self-generated Endorsement Key, to start a secure transaction with a trusted entity". Actually nothing prevents the use of a software TPM with its own EK for a secure transaction with a trusted entity, if the entity trusts the software TPM. The EK in a hardware TPM just makes it impossible for a TPM emulator to pretend to be that hardware TPM.
• "[Sealed Storage] means the data can be read only by the same combination of software and hardware". This confuses the properties of trusted computing with that of a DRM system. Sealed Storage releases data to a particular combination of software and hardware, but that software can propagate the data in any way it chooses, and trusted computing provides mechanisms to help propagate the data. Hence data protected by trusted computing could be read by other software and hardware, or just by one combination of hardware and software.
• "This will prevent people from buying a new computer, or upgrading parts of their current one except after explicit permission of the vendor of the old computer." This confuses the properties of trusted computing with alleged effects of DRM systems. In trusted computing, the only upgrades that require permission from an OEM are upgrades to the trusted computing mechanisms that enforce reliable selection of behavior.
• "Remote attestation allows changes to the user's computer to be detected by authorized parties". This is overly simplistic. Remote attestation allows changes to the user's computer to be reported by authorized parties, such as the computer owner. It is the authorized party, the one who authorizes reporting, that approves the receipt of attestation information by remote parties.
• "The computer can then present this certificate to a remote party to show that its software has not been tampered with." This is inaccurate, because the `certificate' only shows that unaltered software is currently executing, not whether the software has been tampered with in the past.
• "Remote attestation is usually combined with public-key encryption so that the information sent can only be read by the programs that presented and requested the attestation, and not by an eavesdropper, such as the computer owner." This is at minimum misleading and arguably incorrect. There is no mechanism in trusted computing to prevent the owner obtaining an attestation value. In fact attestation cannot operate without permission from the computer owner. Encrypted attestation serves only to protect the owner's privacy from eavesdroppers. If the owner does not want privacy, the attestation value need not be encrypted.
• "secure I/O prevents the user from recording it as it is transmitted to the audio subsystem". This confuses the properties of trusted computing with that of a DRM system. There is no mechanism in trusted computing to prevent the user from recording an output.
• "remote attestation protects it from unauthorized software even when it is used on other computers". It's hard to be sure what this means. It might mean that remote attestation can be used to ensure that a trusted computer will respect the data that it is accessing on a remote computer.
• "Remote Attestation use, however, has been discouraged in favour of Direct anonymous attestation". It should be made clear that DAA is a substitute for a Trusted Third Party, not a substitute for remote attestation. An additional complication is that DAA can be used with a Trusted Third Party.
• The current description states in multiple places that a CA generates an AIK. This is incorrect. In fact the TPM generates AIKs and a CA provides certificates for AIKs.
• "These three credentials will in short be referred to as "EK". The EK is a platform specific key that uniquely identifies the platform." These statements are contradictory and unhelpful because, in trusted computing, the EK is a key and is not a certificate or three certificates.
• "The EKpub will uniquely identify the endorser of the platform, model, what kind of software is currently being used on the platform, details of the TPM, and that the platform (PC) complies with the TCG specifications". This is confusing because various certificates, which reference the EKpub, describe the platform properties. Also, these certificates do not identify what kind of software is currently being used on the platform.
• "Allegedly, this will provide the user with anonymity". In fact it will provide pseudonymity unless each AIK is used only once.
• "If the Verifier accepts the DAA supplied it will produce an AIK" Again, it’s the TPM that produces AIKs.
• "If the anonymity of the user as a whole will be increased by the new version is another question". There's mathematical evidence that DAA anonymizes the TPM. All that trusted computing can try to do is not make anonymity any worse than it already is (because of other factors).
• "One can easily question the objectives of the Issuer, since this most commonly will be the platform manufacturer." The issuance of an EK and EK credential (or a DAA credential) by an OEM is a value-added service provided by the OEM to their customers. Without such credentials, it’s difficult for a private customer or small business or organisation to convince others that they have a genuine trusted platform. (A famous company or organisation could produce its own credentials and rely upon its brand to convince others that they have a genuine trusted platform.) There's mathematical evidence that an Issuer can't use DAA to obtain or deduce any linkage between a specific trusted computer and the usage of that computer.
• "Another key question is what kind of information will be supplied to the Issuer in order to obtain the DAA credentials". The answer to this question is that the DAA protocol uses the pubEK (and, by implication, the EK certificate) to create DAA credentials. Walklooker ( talk) 10:32, 4 April 2010 (UTC)
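To make the AIK correction above concrete, here is a toy sketch (invented names, not the real TPM or Privacy CA interfaces) of the division of labour: the TPM generates the AIK, and the CA only certifies it after recognising the platform's EK:

```python
import hashlib
import secrets

class ToyTPM:
    """Toy model: the TPM holds the EK and *generates* AIKs itself."""
    def __init__(self):
        self.ek_priv = secrets.token_bytes(32)
        self.ek_pub = hashlib.sha256(self.ek_priv).hexdigest()
        self.aiks = {}

    def create_aik(self):
        priv = secrets.token_bytes(32)
        pub = hashlib.sha256(priv).hexdigest()
        self.aiks[pub] = priv
        return pub  # only the public part ever leaves the TPM

class ToyCA:
    """Toy privacy CA: it never makes AIKs; it only certifies AIKs
    presented alongside an EK it recognises as genuine."""
    def __init__(self, known_eks):
        self.known_eks = set(known_eks)

    def certify(self, ek_pub, aik_pub):
        if ek_pub not in self.known_eks:
            return None  # not a recognised genuine platform
        return "cert:" + aik_pub  # the certificate names the AIK, not the EK

tpm = ToyTPM()
ca = ToyCA(known_eks={tpm.ek_pub})
aik = tpm.create_aik()          # key pair created inside the TPM
cert = ca.certify(tpm.ek_pub, aik)
assert cert == "cert:" + aik
assert ca.certify("bogus-ek", aik) is None
```

The returned certificate references only the AIK, which is why each AIK can act as a pseudonym for the platform, as noted in the `pseudonymity' point above.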
Thank you for making constructive comments. I actually did try simple editing (on a local copy of the Wikipedia page) and suggested the revamp because I ended up with more new text than original text. If we start again, the structural changes would be removal of the confusion with a DRM system (perhaps putting that information into the `DRM application' section), followed by corrections and new information. So there will still eventually be large alterations. Would you be prepared to help with this? Walklooker ( talk) 09:23, 20 April 2010 (UTC)
I bought a new computer in 2012 that did not come with this, so the article is wrong. -- 209.188.32.201 ( talk) 17:17, 12 July 2013 (UTC)
The 'Trusted third party' section in this article has had a 'needs cleanup' tag on it since May 2010 because it has no citations and is not written in an encyclopedic tone. I'm replacing this entire section with a link to Trusted third party, because that article appears to cover the topic much more thoroughly and more properly. It would be nice for the 'Trusted third party' section here to have a brief summary of the topic in addition to the link, but I wasn't able to learn enough from that other article to summarize it in a useful way. - Brian Kendig ( talk) 17:26, 7 January 2024 (UTC)
A substantial portion of this article is devoted to speculation about potential applications of computer security.
Speculation about potential application for new technologies isn't knowledge. 99 44/100ths% of it is as ephemeral as a soap bubble and yet less captivating and entertaining.
It's enough to say that the cost of being unable to prevent altered and malicious software from interfering with the intended function of everything from industrial machines to music players is sufficient to inspire innovators to apply every available tool, including trusted computing initiatives.
A final reason not to put speculative applications here is that experience has shown that security concepts simple enough to explain in a few sentences have proven to be easily defeated. Application proposals sufficiently complex to be worth knowing about will also be long enough to merit a Wikipedia article of their own.
As an editorial comment on the pro/con optimist/defeatist saga here: computing technologies have been proposing and speculatively promising more than could be delivered in a generation since the 1950s. Like the bubbles on top of a glass of champagne, without them it would be much less, but they don't really add anything. TPM is the most recent in a long line of timely initiatives to increase the difficulty of defeating computer security. It's good and it's helpful, but without being able to prevent physical access to the platform it remains the same problem as communicating over an open channel. The time and cost to intercept and alter can be increased, but never insurmountably. PolychromePlatypus ( talk) 22:15, 5 May 2024 (UTC)