Some of this seems very subjective. —Preceding unsigned comment added by 68.225.92.194 ( talk) 00:36, 22 August 2009 (UTC)
It seems to me that a section on DPI requirements is sorely lacking in this article. Unfortunately, I don't know where to find the physiological data references to back up the claims in the following posting by Kelly Flanigan, which seems to be the most rational explanation I've found. Unfortunately, it's not a reputable enough source. —Preceding unsigned comment added by Jaxelrod ( talk • contribs) 15:01, 17 October 2008 (UTC)
Since inches are now only an American unit* the spellings on this page should be in American English. Similarly, a page on the British monarch or Australian government would use British spellings, and a neutral article (i.e., one on molecular biology) could use either or both spellings. SteveSims 04:39, 5 January 2007 (UTC)
* Excluding a few non-English speaking countries.
I hadn't noticed before that a separate article existed for DPI until Bobblewik helpfully pointed it out. I have arrogantly decided to simply redirect DPI to this article, wiping out the previous contents of that article. Here is why:
Of course, the history is preserved, if anyone wants to extract any useful nuggets of information from it. -- Wapcaplet 20:57, 12 Aug 2004 (UTC)
From an editor: Draw a 1-inch black line on a sheet of paper and scan it. If the resulting image shows a black line with a width of, say, 300 pixels, then does the scanner not capture at 300 PPI? 128.83.144.239
I moved the above anon comment from the article to here. It was posted by way of explaining the sentence "A digital image captured by a scanner or digital camera has no inherent "DPI" resolution until it comes time to print the image..." I'm not sure how it helps to explain this point, however; indeed, it seems to stem from the very misunderstanding of DPI that is being explained (that DPI and PPI are not the same thing, and SPI is yet another thing entirely). -- Wapcaplet 23:42, 23 Sep 2004 (UTC)
DPI is mostly used to tell what resolution an image should be printed at. Of course, this should really be "pixels per inch"! But since it has become a standard, I think it should be explained here.-- Kasper Hviid 18:33, 9 Nov 2004 (UTC)
I wasn't able to find the references you gave on lexmark.com or olympus-europa.com, but it doesn't surprise me that DPI would be used in this broad way in documentation intended for the general consumer. I've seen scanner software that uses DPI to mean samples per inch. "Dots" is a fairly general term for most people; pixels, color samples, and ink spots could all be "dots" to most people. I don't think there's any call for distinguishing an "official" use of the word. It's official if people use it that way, and (I suppose) if it's defined that way in the dictionary, which it is. It's really only the more technical among us who care to differentiate DPI from PPI and SPI.
As for the purpose of describing screen resolution, I suppose it's probably useful in calibrating a computer display to a printing device. If a print shop needs to have things displayed on their computer monitor at the same size they will be printed, monitor PPI is useful. I thought about adding to the pixels per inch article to include the idea of pixel density on paper, but while it makes sense to me, I know of no other instances of PPI being used in that way. If you find such a usage, let me know! -- Wapcaplet 22:59, 10 Nov 2004 (UTC)
---
Yes, since dpi and ppi are obviously used interchangeably, this interchangeability should be explained here... so that novices looking here for definitions can understand what they find to read in the real world. It seems extremely arrogant to say "Wrong" and "misuse" when obviously pixels per inch has always been called dpi. And still is.
Do some few say ppi? Yes. Do the vast majority say dpi. Absolutely yes.
1. All scanner ratings are specified as dpi, obviously meaning pixels per inch. They don't say "samples per inch", they all say dpi, which we all know means pixels per inch. Scanners create pixels, not ink dots. Who are you to call every scanner manufacturer wrong?
2. All continuous tone printers (dye subs, Fuji Frontier class, etc) print pixels, and call their ratings dpi too (colored dots also called pixels). Who are you to call all these manufacturers wrong?
3. The most current JPG image file format specification claims to store image resolution in "dots per inch". The most current TIF file format specification claims to store image resolution in "dots per inch". They are referring to pixels... there are no ink dots in image files. Who are you to call these authors of the most common file format specifications wrong?
http://www.w3.org/Graphics/JPEG/jfif3.pdf (page 5)
http://partners.adobe.com/public/developer/en/tiff/TIFF6.pdf (page 38)
4. Google searches on 7/14/06 for
"72 dpi" 17,200,000 links
"72 ppi" 124,000 links
(138 times greater use of dpi... a couple of orders of magnitude more usage)
You may be aware that 72 dpi topics are never about printer ink dots.
When calling everyone else wrong, a wise man would reevaluate his own position. The Wikipedia author who claims misuse of dpi is obviously dead wrong. It is probably only his wishful thinking that the world OUGHT to be as he wishes it to be, but it is just his imagination, and this Wiki definition is definitely WRONG.
The two terms are obviously interchangeable. Wake up, look around, where have you been? Pixels per inch has ALWAYS been dpi. Yes, dpi does also have another use. So what? Almost every English word has multiple meanings and uses. However, which term is best is not important here - this is certainly not the place to decree it (as attempted). Both terms are obviously used with the same meaning (pixels per inch) and that matter is long settled. Say it yourself whichever way you prefer to say it, but we obviously must understand it both ways. Because we see it everywhere both ways. So this both-ways phenomenon needs to be explained in the definitions here. Without bias. About how the real world really is, not about how some author might dream it ought to be.
WHAT IS IMPORTANT is that beginners need to know the two terms are used interchangeably everywhere, with both terms meaning pixels per inch, simply so they can understand most of what they will find to read about the subject of imaging. There is no reason to confuse them even more by telling them everything they read is wrong. Wiki is wrong. The Wiki definition can only totally confuse them.
Beginners do need to know the two concept differences (your two definitions), but once the concepts are known, then the terms are almost arbitrary. We could call them "thingies per inch". The context determines what it means (like all English words), and if the context is about images, dpi can only mean pixels per inch (ppi can mean that too). If the context is about printer ratings, then dpi can only mean ink dots per inch. 71.240.166.27 03:20, 14 July 2006 (UTC)
DPI is the CORRECT term for the target resolution at which an image is to be printed or displayed. It is a value stored in a digital file which indicates the current target printing resolution of that file. To use it otherwise is to sow confusion out of some misguided ideology. —Preceding unsigned comment added by 24.128.156.64 ( talk) 18:19, 10 October 2007 (UTC)
I see several printers advertised with "4800 x 1200 color dpi" and such. Is this some kind of industry conspiracy to redefine the term "dpi"? Or am I misunderstanding something? Example: [1] -- Anon
^ I agree with the last poster. What were you guys even talking about? Grab a ruler and a microscope, print something on one of those printers at "4800x1200 dpi", and count how many ink dots are placed into each horizontal and vertical inch of paper. "DPI" has referred to the spatial resolution a printer is capable of (easily provable, at least on older devices, by making a complex 900x900 pixel image, printing at maximum quality, and checking how big it came out and whether any detail was missing) for as long as I can remember being involved in the computer scene in any way, which goes back a good 24 years (to early 1990). For example, some old 9-pin Kyocera dot matrix was apparently capable of 240x160 dpi... Chunky, but better than the 216x144 of some cheap rivals. What exactly are you trying to disprove, here? 193.63.174.211 ( talk) 11:00, 19 February 2014 (UTC)
I am concerned about the following statement:
Computer displays work in a similar fashion to printers: they use a combination of different amounts of the primary colors (in this case, the additive primaries: red, green, and blue) to produce a wide range of visible colors. Most printers use the (subtractive) primaries and black in different combinations and patterns.— Kbolino 02:18, 10 February 2006 (UTC)
"The DPI measurement of a printer is dependent upon several factors, including the method by which ink is applied, the quality of the printer components, and the quality of the ink and paper used."
This is not true, or at least is confusing.
The DPI in the printing direction is dependent on the head firing frequency and the linear print speed. The DPI in the advance direction (perpendicular to the printing direction) is dependent on the spacing of actuators (e.g. nozzles for inkjet) on a head, and the angle of the heads. Each of these can be multiplied by use of interleaving/"weaving" using multiple passes and/or multiple heads.
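The two relationships described above can be sketched in a few lines. This is only an illustration of the arithmetic; the firing frequency, carriage speed, nozzle pitch, and pass count below are hypothetical numbers, not the specifications of any real printer.

```python
# Hedged sketch of how print-direction and advance-direction DPI follow
# from the head parameters described above. All figures are hypothetical.

def print_direction_dpi(firing_frequency_hz, carriage_speed_ips):
    """Dots per inch along the direction of head travel:
    dots fired per second divided by inches travelled per second."""
    return firing_frequency_hz / carriage_speed_ips

def advance_direction_dpi(nozzles_per_inch, interleave_passes):
    """Dots per inch perpendicular to head travel: native nozzle
    pitch multiplied by the interleaving ("weaving") factor."""
    return nozzles_per_inch * interleave_passes

# e.g. a head firing at 12 kHz on a carriage moving 20 inches/second:
print(print_direction_dpi(12000, 20))   # 600.0 dpi in the print direction
# e.g. 300 nozzles per inch with 4-pass interleaving:
print(advance_direction_dpi(300, 4))    # 1200 dpi in the advance direction
```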
What the sentence above may have been trying to get at is that different print modes can use different firing frequencies, linear speeds, interleaving factors, etc., and the effect of ink and media settings in print drivers is often to change the print mode (possibly in addition to other software settings that don't affect DPI). Also, different print head technologies may improve at different rates in terms of firing frequency, actuator spacing, etc.
On the subject of advertisements, I strongly suspect that some of the dpi figures quoted in printer adverts are inflated, and there is more than one way to inflate a dpi figure.
This kind of creative arithmetic is all the result of trying to munge various resolution and quality factors into a single number for marketing purposes. It's similar to how clock speed used to be used to indicate how "fast" a processor was. At its worst, it can lead to distorted technical decisions that maximize DPI with no improvement in, or even at the expense of quality (just as the Pentium 4 design was distorted to maximize clock speed).
It's unlikely that someone could see a visible improvement in resolution above about 1000 dpi with the unaided eye at a normal reading distance. The extra quality that you can get from a higher dpi than that is not due to an increase in resolution; it's due to a reduction in "graininess" (and possibly better hiding of head defects) from using smaller drop volumes, which requires you to use more dots in a given space to achieve the same ink density.
To meaningfully compare printers, you need at the very least to know the volume of ink in a drop (for inkjet heads as of 2006, this can vary from about 1 to 80 picolitres), including what "subdrop" volumes are possible in the case of variable-dot heads, as well as the real dpi figure in each direction. The overall quality will also depend on halftoning algorithms, the gamut of the inks used, color management, positioning accuracy of the printer mechanism and any encoders, head defects and how well the print mode hides them, etc. The intended application is also significant: to give an extreme example, there's no point in achieving "photographic" resolution in a printer that will be used to print billboards -- although color gamut would still be very important for the latter.
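As a back-of-envelope sketch of the drop-volume point above: with a fixed target ink density, halving the drop volume roughly doubles the linear dpi required. The target density and drop volumes here are purely illustrative, not measurements from any real head.

```python
# Hedged sketch: smaller drop volumes force more dots per unit area to reach
# the same ink density, so a higher dpi figure may reflect drop size rather
# than extra resolving power. All numbers are illustrative only.

def drops_per_square_inch(target_ink_pl_per_sq_inch, drop_volume_pl):
    """Dots needed per square inch to deposit a target ink volume."""
    return target_ink_pl_per_sq_inch / drop_volume_pl

target = 1_440_000  # picolitres of ink per square inch (hypothetical target)
for drop_pl in (10, 2.5):
    n = drops_per_square_inch(target, drop_pl)
    dpi = n ** 0.5  # dots per linear inch on a square grid
    print(f"{drop_pl} pl drops -> {dpi:.0f} dpi needed for the same density")
```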
DavidHopwood 00:16, 5 June 2006 (UTC) (working for, but not speaking for, a printer manufacturer)
"There are some ongoing efforts to abandon the dpi in favor of the dot size given in micrometres (µm). This is however hindered by leading companies located in the USA, one of the few remaining countries to not use the metric system exclusively."
I wouldn't blame US companies for this, even though I'm an enthusiastic S.I. advocate. Software interfaces to RIP packages and driver APIs require dpi, and there's no compelling reason to change them. Despite this, it is possible for a printer controller implementation to be internally almost S.I.-only. DavidHopwood 01:20, 5 June 2006 (UTC)
This is pretty trivial, but should the case (DPI or dpi) be standardized in this article? The last section uses dpi while the others use DPI. -- MatthewBChambers 09:13, 2 October 2007 (UTC)
Both of the external links are low quality links to pages by writers with an ideological axe to grind and a limited understanding of the topic. —Preceding unsigned comment added by 24.128.156.64 ( talk • contribs)
This sentence "Therefore it is meaningless to say that a digitally stored image has a resolution of 72 DPI." is just simply, clearly unequivocally false. It is also misleading in a way that exacerbates existing confusion among users. —Preceding unsigned comment added by 24.128.156.64 ( talk) 16:44, 12 October 2007 (UTC)
Hmm, there seems to be some disagreement here about the origin of the term DPI. The position of the article is that it has its origins in printers, while 24.128.156.64 says that it had its origins in digital file formats. Does anyone have a reference to support either position? Personally, I think it's the printers. Rocketmagnet 17:10, 12 October 2007 (UTC)
I don't think it is only an issue of origin. It is, most importantly, an issue of use. Users of graphics software get confused on this issue as it is. To deny the fact that all professional graphics software and most amateur graphics software allows the editing of a value called DPI which gets stored with the file adds to that confusion. Here is an example of a page using DPI correctly: http://msdn2.microsoft.com/en-us/library/ms838191.aspx —Preceding unsigned comment added by 24.128.156.64 ( talk) 17:23, 12 October 2007 (UTC)
. :) -- jacobolus (t) 18:07, 13 October 2007 (UTC) 24.128.156.64 22:45, 13 October 2007 (UTC)
just my 2 cents on the words on top: the number of pixels is a hard-coded value in any pixel format. the dpi value is sort of a scratch parameter in common image formats. people doing layouts and then prints first have to decide what width and height (measured in cm or inch) they want to select for the given image. only after that decision can you do the math and divide the number of pixels by the number of inches... some folks prefer talking in the newspaper-typical columns, e.g. 3 columns width - and that's a fixed length for a specific type of paper. -- Alexander.stohr ( talk) 16:18, 5 August 2010 (UTC)
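The division described in the comment above can be sketched in two lines; the pixel width and chosen print width below are hypothetical figures for illustration only.

```python
# Sketch of the arithmetic above: the pixel dimensions are fixed in the
# file, and an effective "dpi" only falls out once a physical print size
# has been chosen. Figures below are hypothetical.

width_px = 1800            # fixed by the image file
chosen_width_inches = 6.0  # layout decision, e.g. a chosen column width
effective_ppi = width_px / chosen_width_inches
print(effective_ppi)       # 300.0; the same file printed 9 inches wide would be 200 ppi
```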
Can editors who understand these things (I don't) add explanations to the article that make the kind of jargon typically found in printer specs understandable to laypeople?
Examples (taken from HP and Brother):
-- Lambiam 08:43, 24 April 2009 (UTC)
--ADDED QUERY (2009-04-30): At the start of this article, whose length I've yet to scrutinize, comes an example that sadly I find far from enlightening--to wit:
| An example of misuse would be if an LCD monitor manufacturer claimed that
| a 320x240 pixel 3" monitor (2.4"x1.8") actually had a resolution of 400 DPI,
| (three times the pixels per inch).
[NB: delete this last comma--NOT wanted before parenthesis, delimited by parens.]
PLEASE "show your work": i.e., what is being multiplied/divided-into what? E.g., I multiply 320x240 = 76_800, and 2.4x1.8 = 4.32 and think that density should result from 76_800 / 4.32 (= 17_777.77...), but that's not close to 400! ?? Dividing 76_800 by "400" gets me a 192 which I can't figure how to map ... . .:. It would be helpful, at this introductory point, to show the calculation! Thanks. (-; 216.194.229.45 ( talk) 13:18, 30 April 2009 (UTC)
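Working the querent's example through (a sketch of the arithmetic, not article text): DPI and PPI are linear measures, pixels per inch along one edge, not pixels per square inch, which is why dividing the total pixel count by the total area gives a misleading number. The 400 figure appears to come from counting each red, green, and blue sub-stripe as a separate "dot".

```python
# Linear, not areal: divide each pixel dimension by the matching physical
# dimension. The "three times" in the example counts RGB subpixels as dots.

width_px, height_px = 320, 240
width_in, height_in = 2.4, 1.8

true_ppi = width_px / width_in            # 133.33... pixels per linear inch
subpixel_dpi = (width_px * 3) / width_in  # counting each R, G, B stripe as a "dot"
print(round(true_ppi, 2))  # 133.33 (height_px / height_in gives the same value)
print(subpixel_dpi)        # 400.0, the inflated figure from the example
```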
quote from the article: Software programs render images to the virtual screen and then the operating system renders the virtual screen onto the physical screen; with a logical PPI of 96 PPI, older programs can still run properly regardless of the PPI provided by the physical screen. Usability and readability are heavily influenced by the technology (laser beamer, flat screen with/without contrast enhancements, cathode ray tube), by the viewer's distance, the viewer's individual vision capabilities and by some "crap" the operating system does, e.g. anti-aliasing fonts. even the environment (lit, dark, foggy, reflections, ...) might play a big role every now and then. 99.5% of all computer programs will run on any screen with any PPI value - they just don't care about it. there are pretty few programs that need to fulfill any exact measures; rather, sometimes the application offers a setup e.g. for the used font in editors and terminals. furthermore, modern operating systems offer lots of tuning for e.g. border widths, menu fonts, window decoration and so on. if someone wants to use an 8x8 font he can, but he can use a 14x16 font as well - the user is adapting to what he wants to see. if you connect a 12" tube or a 40" flat screen the operating system will rather respond with a desktop having more or less width than use the PPI value for adjusting to the changed conditions. BTW for much of the legacy applications (e.g. a window of a C64 emulator) there are even zoom modes, and nearly all modern consumer flat screens even have built-in zoom, even if folks like to use such devices in a 1:1 pixel match mode - forget about PPI and fix that statement in the article. -- Alexander.stohr ( talk) 16:27, 5 August 2010 (UTC)
Are the common monitor "pitches" given in terms of "dot pitch" or "dot trio pitch"? Mfwitten ( talk) 22:22, 1 October 2011 (UTC)
Just my luck to stumble, when looking for some clarification, on this article - one of those where the discussion is much longer than the article itself :-) There are lots of sentences in the article (and in the discussion) which I don't understand; others which seem confusing. But then again, I am far from a specialist in this field. Yet, maybe the following is helpful, if only to bring out the points of disagreement.
1. I think that in this article (and in the discussion) two objectives are intertwined, leading to confusion. I.m.o. an encyclopedia should a) describe the (various) common uses of a term; b) offer explanations and background knowledge. Therefore, I wholeheartedly agree that terms like 'wrong' or 'misuse of the word' or even 'misleading' should be avoided. However, an encyclopedia is not a dictionary; it should go one step further and EXPLAIN certain things. In fact, the various 'definitions' of dpi and ppi cannot even be comprehended without some background knowledge, and so would by themselves be of no use to any reader. Any explanation requires a strict definition of the terms used. Quite apart from any claim to 'correctness', when the explanation does not make a clear choice of words it becomes incomprehensible (which, i.m.o., it is right now). Both requirements need not be contradictory; one could very well describe the various ways in which a term is used, and nevertheless, when it comes to an explanation, use the terms in one specific well-defined sense.
2. So, what 'definitions' of DPI and PPI and other terms will the explanation use? In the discussion, it is not at all clear to me when the disagreement is about the use of a word and when it is about facts. Surely, these two kinds of disagreements should be separated as well as possible. In my opinion, following a historical line may be the most helpful to further comprehension.
- As far as I know the term DPI was first used for digital phototypesetting machines, like the Digiset (VideoComp in the USA), introduced in 1966. (I'm not sure about this; it would require verification.) In the '80s, a digital phototypesetter could achieve resolutions up to 3000 dpi (or was it 4000?). DPI, then, describes the number of circular black dots, of varying size up to completely overlapping, per inch of paper. (The density of coloured dots was, at the time, described by different units.) When, for a colour printer, just one figure is given for the "DPI", it pertains to the number of black dots per inch, and for a very good reason: to make it comparable to the DPI of a bl/w laserprinter, still widely in use.
I would propose to use the term "DPI", in the explanatory part of the article, in this sense only.
When two figures are given, as in "1200 x 4800 color dpi", I agree with Wapcaplet that the figures refer to black dots (used to print text and comparable to the figure for bl/w printers) and 4-colour dots (to print coloured pictures) respectively. This use of 'DPI' seems to me a quite reasonable 'extension' of the term DPI, to adapt it to the colour-print era. (And frankly, I don't understand the objections raised by 76.126.134.152.) The question remains, however, what to make of printer specifications where the second number does not equal 4x the first number (or vice versa, since there seems to be no rule on the order of the two numbers). Does anyone know what is meant by a printer specification like "1200x2400 dpi"? I suspect it means that text is printed at 1200 dpi and colour pictures are printed at 600 dpi (that is: 600 dots of one colour per inch). Finally, I'd like to know what the 'x' between the two figures means; it suggests something like area, like 'horizontal and vertical'. If what I suppose here is true, however, it has no meaning whatsoever and could just as well have been a dot or a comma or a semicolon. This should be pointed out to the reader.
3. The term 'pixel' (and the related 'PPI') originates in an entirely different field: the 'digitisation' of pictures. To make up a digital picture from an image, the image is overlaid with a grid of (theoretically) squares, and for each square an average colour and luminosity is determined, either by a scanner or a camera. The resulting 'bitmap', containing values for colour and luminosity of each pixel, is then saved in a picture file. I would propose to use the term 'pixel' in the explanatory part of the article in this sense only.
Many file formats (but I don't know which ones exactly) give the option of writing a value for PPI in the meta-section of the file, thus making it possible to determine the real size of the scanned original. Evidently, this value has no meaning for a picture of a landscape taken with a digital camera. (To my knowledge, cameras do not save this value when saving in, for instance, jpeg, but most scanning software does fill this value.)
4. The term PPI is, I think, not as clear as DPI. For scanners, the meaning would seem quite clear to me: it describes the size of the grid used to scan the picture. PPI, used in that sense, is a useful measure for the 'resolution' of a scanner. In practice, however, I rarely see PPI in the specs of scanners; manufacturers seem to prefer the more widely known term 'DPI' instead, and it is, I think, anybody's guess what they mean by that. Some may simply use 'DPI' when they mean 'PPI' (thus giving useful information on the quality of the scanner); others may use 'DPI' to simply refer to the measurements of the scanned picture as laid down in the meta-section of the resulting JPG or TIFF file, which has nothing whatsoever to do with the quality of the scan. (In practice, I have met both.)
For cameras, the use of the term 'PPI' seems less common, and for good reason: it is not clear what it refers to. When used, however, PPI does indeed seem to refer to the size of the sensor: given a total number of pixels, the bigger the chip (and thus the lower the PPI value), the better the quality.
I would propose to use the term PPI, in the explanatory section of the article, primarily to denote the resolution of a scanner. (The meaning when used in reference to cameras is somewhat confusing, as a lower figure is associated with better quality, which is quite the reverse of its meaning when used with respect to scanners.)
5. - The concept of dpi could conceivably be extended to monitors, as these, too, are output devices, and the original CRTs used coloured dots. However, the meaning is not as clear, as a monitor does not produce 'black dots'. The term 'dot pitch' has always been more popular for monitors. (Although I don't know whether this referred to the distance between individual dots, or between the dots of one colour, or between groups of 3 coloured dots.) In fact, most modern lcd monitors do not produce dots at all; instead, they work with squares built up from 3 coloured stripes, usually denoted as 'pixels'. (I'd propose to use the term screen-pixels to avoid confusion.) Most often, the resolution is described as "X*Y pixels", while the physical size is described by the length of a diagonal across the screen.
Thus, for most people the terms DPI and PPI in connection with a monitor don't add much, except confusion. In graphic design, though, you may want to make the size on-screen exactly the same as the size in print. In practice, you need to know the PPI of your screen to achieve this. One can calculate it as described in the article, or simply take the vertical number of screen-pixels and divide it by the screen height in inches.
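Both calculation routes just mentioned reduce to the same number, which a short sketch can show; the 24" 1920x1200 monitor used here is a hypothetical example.

```python
# Sketch: screen PPI via the diagonal (as in the article) and via the
# vertical edge. Monitor dimensions are hypothetical.
import math

h_px, v_px = 1920, 1200
diagonal_in = 24.0

# Route 1: diagonal pixel count over diagonal length in inches.
ppi_diagonal = math.hypot(h_px, v_px) / diagonal_in

# Route 2: vertical pixels over screen height, with the height
# recovered from the diagonal and the aspect ratio.
height_in = diagonal_in * v_px / math.hypot(h_px, v_px)
ppi_vertical = v_px / height_in

print(round(ppi_diagonal, 1), round(ppi_vertical, 1))  # both give about 94.3
```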
6. Some remarks
- Digital printers produce dots in a certain colour. These dots can vary in size, and are so arranged that they can overlap, ultimately, when fully overlapping, producing black. (I'd put an asterisk here, as this is very subtle stuff, concerning the way our eyes perceive colours etc., but I think this description may do in this context.) Screens, on the other hand, from the CRTs of old to modern LCD or plasma, do NOT vary the dot size, but they CAN vary the luminosity of the dots (or stripes or whatever). The dots can be circular, but some printers can produce elliptical or even semi-square dots; the printing software uses this option to produce the best-looking output. I quote from http://www.prepressure.com/printing-dictionary/d "The dot shape is varied to minimize the dot gain at the point where dots join one another. Elliptical dots minimize the sudden dot gain where corners of dots connect; they may connect in their short direction at 40% dot area and in their long direction at 60% dot area. Round dots, often used for newsprint, may not connect until 70% dot area." (Here, I'm out of my depth again: hopefully someone knows more about this. Although, on the other hand, it doesn't seem very relevant for this article.)
- It should be noted (and I sorely miss this in the article) that the term DPI when referring to dots of one colour (notably black) is still highly relevant when used to describe a printer. Text, as opposed to pictures, is often, if not always, delivered to the "printing engine" as a vector format, which is translated to a dot pattern according to the specifications of the printer. In other words: a printer with a higher dpi specification will give you crisper text on your print. The same is of course true for pictures delivered to the 'printing engine' in vector format. (Ensuring that a vector drawing does indeed profit from the maximum dpi of the printer seems to me an art in itself when you're not using Adobe software. Yet, this seems to me beyond the scope of this article)
- Obviously, to print a bitmap (that is: a file describing pixels and resulting from a scanner or camera), the pixel pattern has to be 'translated' to the dot pattern of the printer. A "1:1 print" simply translates each pixel into the appropriate sizes of the dots in one 'colour group' (which may consist of 4, 5, 7 or even 12 different colours - but that's another article). Obviously, the better the printer, the smaller, and crisper, the "1:1" print. When enlarging a picture in print beyond "1:1", one pixel is spread over a number of groups of dots. Thus, the picture gets vaguer, up to the point where the individual pixels become visible. When reducing the size below "1:1", more than one pixel is available to determine the dot sizes for the printer, resulting in a print as good as the printer can achieve. (Theoretically, one would expect there to be certain favourable proportions, e.g. "4 pixels to one colour group", which is, theoretically, much easier to render than "1.7 pixels to one colour group".) However, in practice the rendering software is very sophisticated and there seems to be hardly any gain in using such simple proportions. (Here again, I'm out of my depth; but maybe this is way beyond the scope of this article anyway.)
7. My comments on the article
In view of the above, I have a number of comments on and questions about this article.
- "The DPI value tends to correlate with image resolution, but is related only indirectly." This seems to me unclear, if only because 'correlation' is not a term many people understand. But apart from that: DPI and PPI are either synonymous or (as I propose to use the terms) they have no correlation whatsoever, not even 'indirectly'.
- The article starts by explaining 'monitor resolution', which is the most problematic use of the term dpi. Bad idea, I think.
- "A less misleading term, therefore, is pixels per inch." I don't see what is misleading and I don't see the 'therefore'.
- "the measurement of the distance between the centers of adjacent groups of three dots/rectangles/squares on the CRT screen." 1. To my knowledge, there are no CRT-screens with squares or rectangles. 2. To my knowledge, there are no LCD screens with 'groups of squares' 3. Is this true? In other words, could the text be amended by "the measurement of the distance between the centers of adjacent groups of three dots on an CRT-screen, or between the centers of two squares (each consisting of 3 coloured rectangles) on a LCD screen."?
- "DPI is used to describe the resolution number of dots per inch in a digital print and the printing resolution of a hard copy print dot gain; the increase in the size of the halftone dots during printing. This is caused by the spreading of ink on the surface of the media." This sentence forms the heart of the article in the sense that it defines the term that is the title of the article. Yet it contains so many unclear points that I'll have to take them one by one: "DPI is used to describe the resolution number of dots per inch" Should this not be either "to describe the resolution" or "to describe the number of dots per inch"? "and the printing resolution of a hard copy print dot gain" I fail to see why this should be added; I have no idea what the difference is between a 'digital print', as described in the first part of the sentence, and a "hard copy print" as described in this second part. I have no idea what a 'dot gain' is. "the increase in the size of the halftone dots during printing" I don't know how this is connected to the statement before the semicolon; I don't know what to make of 'halftone dots', nor what dpi has to do with "spreading of ink on the surface of the media".
In summary: this definition is totally unclear to me.
- "Up to a point, printers with higher DPI produce clearer and more detailed output." Up to which point?
- "A printer does not necessarily have a single DPI measurement; it is dependent on print mode, which is usually influenced by driver settings." This seems clumsily put; printers always have a maximum dpi (and this is not a measurement but a value). What dpi is effectively used depends on user's choices. (notably choosing 'economy mode')
- "An inkjet printer sprays ink through tiny nozzles, and is typically capable of 300-600 DPI.[1] A laser printer applies toner through a controlled electrostatic charge, and may be in the range of 600 to 1,800 DPI." As the definition of dpi is unclear, so are these statements. Yet, quoting higher values for laserprinters then for ink-jet printers seems to me doubtful in whatever sense the words are taken.
- "The DPI measurement of a printer often needs to be considerably higher than the pixels per inch (PPI) measurement of a video display in order to produce similar-quality output. " Is this so? Why? Why is this Often? How often?
- "This is due to the limited range of colours for each dot typically available on a printer. " Yes, very limited indeed: just one.
- "At each dot position, the simplest type of colour printer can print no dot, or a dot consisting of a fixed volume of ink in each of four colour channels" This is, ithink, not true. Even allowing for the fact that the word 'dot' is used here (very confusingly) to denote a 'dot-group' consisting of dots in all colours the printer is capable of, this is still unnecessarily opaque; I wouldn't know what a 'colour channel' is, for instance. Nor can I see how 'the simplest type of colour printer' works differently, in this respect, from the 'most advanced type of colour printer'. Nor is there any 'ink' in my laser-printer. Finally, the 'fixed volume of ink' is to my knowledge simply untrue; the whole point of colour printing is that the size of the dots (and thus the volume of the ink applied in the case of an ink-jet) varies.
- "typically CMYK with cyan, magenta, yellow and black ink) or 2e4 = 16 colours" This, too, is to me incomprehensible. Prints are made with dots of varying size (not with a 'fixed volume of ink'). The principle of colour printing is partly based on the fact that ink-and toner colours, like normal paint, can produce mixed colours when they overlap and also when they are spaced apart - a phenomenon well known to painters, notably the impressionists. Our eyes being able to discern just 3 colours, all possible colours can be achieved by printing dots of varying size in 3 colours (in practice 4 or more are used). I have no idea why the number '16' would be relevant here, nor where the formula comes from. In fact, I have no idea why all this is relevant to the article.
- "Higher-end inkjet printers can offer 5, 6 or 7 ink colours giving 32, 64 or 128 possible tones per dot location." Incomprehensible - see above.
- "Contrast this to a standard sRGB monitor where each pixel produces 256 intensities of light in each of three channels (RGB)." I have no idea what it is I am supposed to be contrasting here, but it DOES throw up a point: surely, the variation in size of a printer dot comes in discrete steps. I have, however, never seen this in the specification of a printer. Does anyone know more about this?
- "While some colour printers can produce variable drop volumes at each dot position, " Apart from the fact that this is, again, ink-jet-talk and that it is not the volume produced, but the dot-size produced which is relevant, I would like to know what colour printer is NOT able to do this.
- "the number of colours is still typically less than on a monitor." Why would that be? I can't follow the explanation. Nor can I see the relevance with respect to the explanation of the term 'dots per inch'.
- "if a 100×100-pixel image is to be printed inside a one-inch square, the printer must be capable of 400 to 600 dots per inch in order to accurately reproduce the image." This, I think, is true. But: First, the term 'accurately' seems unnecessarily vague. Hereabove, I used prnting "1:1" to denote printing a picture with one pixel translating to one 'colour-group' on the printer. (why not be precise instead of talking about 'accurately' or 'faithfully' and such) Second, the explanation leading up to this fact makes it seems like it's rocket science, while in fact it's quite trivial: To print 100 pixels "1:1", the printer uses 100 colour groups; in case of a four-colour printer, this means 400 coloured dots. In case of a 6-colour printer, this means 600 dots.
- Section "DPI or PPI in digital image files" No factual quibbles here; just the observation that the wording is imprecise, which doesn't help i.m.o. to further comprehension, and that some descriptions use difficult words unnecessarily. For example: "Some digital file formats record a DPI value, or more commonly a PPI (pixels per inch) value". Comment: Formats don't record anything, but computer programs do. Some formats offer the possibility of recording PPI - I do not know of ANY format offering the option of recording DPI. MANY computer programs confuse dpi and ppi and represent the ppi-value as dpi. A PPI value in the file only has meaning when recorded by a scanner program; camera's often record some value (Nikon gives 300 ppi, Canon gives 180 ppi) but these values are entirely without meaning. "If it is labeled as 250 PPI, " What does 'labeled' mean? Let's be precise. "An image may also be resampled to change the number of pixels". Incomprehensible seen the level of the article. Moreover: what has this to do with the explanation of "Dots per inch"?
-Precise wording can, I think, help the understanding. For example, instead of: "Changing the PPI to 100 in an image editing program would tell the printer to print it at a size of 10×10 inches. However, changing the PPI value would not change the size of the image in pixels which would still be 1,000 × 1,000." I would say: "Changing the PPI setting in the 'description' part of an image file (which can be done with an image editing program) would tell the printer to print it at a size of 10×10 inches. This, of course, does not change the pixels in the image file: the picture still consists of 1,000 × 1,000 pixels."
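The point can be demonstrated numerically: print size is derived from the pixel dimensions and the PPI metadata, and changing the PPI leaves the pixels untouched. A sketch (using my own function name):

```python
# Print size in inches follows from pixel dimensions and PPI metadata;
# changing PPI rescales the print, never the pixel data itself.
def print_size_inches(width_px: int, height_px: int, ppi: int):
    return width_px / ppi, height_px / ppi

print(print_size_inches(1000, 1000, 100))  # (10.0, 10.0)
print(print_size_inches(1000, 1000, 250))  # (4.0, 4.0)
```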
- Section "Computer monitor DPI standards" I.m.o. this is a good piece, drawing attention to what is indeed a major source of confusion. But the first time I read it I couldn't make head nor tails of it. While reading, I was waiting to learn what 'the problem' is and what 'confusion' was sown by Microsofts choice. I didn't get that, and I still don't get it. Furthermore, it doesnt help that the use of terms is a bit 'loose', while some odd expressions ("a resolution of 1 megapixels", "the intended size of the image") may put an unsuspecting reader on the wrong foot (as it did me). Also, to introduce the word 'vector image' the first time in the article with the sentence "For vector images, there is no equivalent of resampling an image when it is resized" seems quite inadequate. (there's not even a reference to some other article here) The core of the matter is not at all hard to describe: like I said hereabove: to make the screen image the same size as printed output, you need to know the ppi value of your monitor, and instruct the software accordingly. (here, now: I said it in one sentence) Overrating the PPI of the monitor leads, at least in serious graphics applications, to a 'larger-then-life' picture on-screen.(This, by the way, could be explained in some more detail) However, I still don't see any 'problem' or 'source of confusion'.
I suspect (but is this true?) that a temporary problem has been that Apple software, being written for Apple monitors, did not allow for the software being instructed this way: it simply supposed the monitor was 72 PPI, Apple's 'standard value'. However, by now all Apple graphics software can be so adjusted. (can't it?) Windows, of course, had no say whatsoever over the PPI of the monitor used, and thus Windows software always went with the value used by the OS, which could be adjusted by the user in accordance with the specs of his monitor. (To appease the Apple fans: the Windows graphical software was, at the time, years behind Apple graphics software.)
All in all, the only possible 'sources of confusion' I can see are these: - Microsoft consistently uses DPI for PPI (where 'PPI' stands for 'screen-pixels per inch'), and the dialogue box used to adjust the value insists that one sets "The DPI value of the screen". This, of course, is nonsensical, since the "DPI value of the screen" is a fixed given, not a user's choice. I bet this wording has spread lots of confusion; it should have been something like "What is the PPI value of your monitor?", with an explanation about how to determine this value, and the consequences of making the software believe it is higher or lower than it actually is. - Second possible source of confusion: the many, many articles on the web making a big fuss about "Apple's 72 dpi" vs "Microsoft's 96 dpi", where in fact all this is of interest to IT historians only, or so it would seem to me.
In summary: I think this section, as it stands, is long and doesn't add to comprehending the term "Dots per inch". Propose to strike, or else to re-write in such a way that it deals better with common misunderstandings about "72 dpi" and "96 dpi". — Preceding unsigned comment added by Mabel2 ( talk • contribs) 17:20, 9 January 2012 (UTC)
Section "Computer monitor DPI standards" is confusing and wrong, better to remove it. — Preceding unsigned comment added by 2.34.179.118 ( talk) 10:46, 14 March 2013 (UTC)
...I don't know if it's just me being dumb, or what, but couldn't a whole load of the existing discussion on the article re: zoom levels and so on be replaced by a paraphrasing of "[...] meaning that on a Windows PC, to display the same piece of text with the same pixel resolution as on a Macintosh at '100%' zoom, on an otherwise identical display, the user would instead have to set a zoom of 75%".
You know, given that 1.3333r is the reciprocal of 0.75 and all... (ie, 1 divided by 4/3 is the same as 1 multiplied by 3/4...). The current version really seems to overcomplicate things, given that it's such a simple mathematical relationship with such a simple solution, should you need to obtain something closer to Mac-like output (well, you'd also need to adjust the H and V size pretty much to their minimum settings even on a relatively small PC monitor, and even considering the extra 128 H & 138 V pixels, given how tiny the original 128 and 512k monitors were).
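In other words, the whole Mac/Windows business reduces to one ratio and its reciprocal (a sketch with the two conventional values):

```python
# The 72-vs-96 relationship is a single ratio and its reciprocal.
MAC_PPI, WIN_PPI = 72, 96
zoom_for_windows = MAC_PPI / WIN_PPI  # 0.75: zoom needed on Windows
scale_of_glyphs = WIN_PPI / MAC_PPI   # 1.333...: why 12 pt maps to 16 px
print(zoom_for_windows, round(scale_of_glyphs, 4))
```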
The rest of the talk about how different monitors and video cards can produce output with variable real-life PPI, along with all the examples, as fascinating as they are for a techno-cruft historian, are pretty irrelevant overall, not to mention fairly self-evident to anyone who's seen more than one model of computer monitor in their lifetime. It may as well be a table of apparent PPI for different size SD and HD televisions.
Incidentally, the part in the lede about monitor pitch could probably also be cut down to "dot pitches from around 0.42mm to 0.22mm, decreasing over time, and mainly in the low 30s or high 20s", instead of the (still non-exhaustive) list of more than half a dozen very similar sizes starting at 0.39mm. Incidentally, I've owned a VGA monitor with a 0.42 pitch (coarse as hell, but it existed, was dirt cheap when bought, and was still sufficient, with barely any border on a 14" monitor with typical bezel thickness, for a reasonably clear 640x480 or 640x350 and a just-about-passable 720x400 thanks to the subpixel effects of mostly-monochrome text with tall pixels being displayed over multiple horizontally-offset triangular (hexagonal?) RGB clusters... 0.39 was more the practical max for actually-sharp textmode, and you needed a lower figure for SVGA (0.31? 0.34 for 15"?) or XGA (0.26 at 15", 0.31 at 17"?), but if you're mainly using Windows in VGA or playing games at 320x200 it's irrelevant), and I've reason to believe that wasn't even the worst, but I haven't much way of proving that. Also, it doesn't clarify that this only applied to colour monitors - the monochrome, single-phosphor, dotmask-less CRTs used in early Macs, for hi-rez output from the Atari ST and Amiga, EGA/VGA mono, MDA/Hercules, and in 15khz form as a cheap option for non-game/graphics use of various 8- and 16-bit home computers (including CGA PCs and PC Jrs) didn't HAVE any real concept of dot pitch, beyond the actual pixel output frequency, scanline count, and how tightly focused the electron beam was, all of which being somewhat arbitrary but typically giving a perceptually "clearer" and "sharper" appearance than the fuzzier-edged pixels of a typical colour TV or monitor. Certainly any suggestion of it being relevant to pre-colour Macs or to Hercules-equipped PCs is misleading. 193.63.174.254 ( talk) 17:17, 14 March 2017 (UTC)
There is a move discussion in progress on Talk:Kilometres per hour which affects this page. Please participate on that page and not in this talk page section. Thank you. — RMCD bot 00:59, 10 December 2013 (UTC)
I've produced this rather more accurate one, but I don't have a Wiki Commons account and really don't want to end up adding to my teetering stack of one-hit accounts for no good reason. Could someone grab this and upload it in my stead? Take the credit if you like :p ... just so long as you also resist the temptation to rename it "Blue Balls.png" (even though that's better than the current file's name...)
It's currently hosted at: http://imgur.com/xwmMaAr
Thanks! 193.63.174.211 ( talk) 11:07, 19 February 2014 (UTC)
I looked for this article to explain if "dots per inch" normally refers to a linear measurement (e.g. 100 horizontal pixels per horizontal inch) or to an area measurement (e.g. 100 pixels in a 1 inch by 1 inch square). Maybe confusion over this isn't very common, but the article never addresses this directly (I was able to figure out from some examples that it is linear). — Preceding unsigned comment added by 77.58.20.100 ( talk) 22:15, 4 January 2015 (UTC)
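For the record, the relationship between the two readings is a simple square (illustrative numbers):

```python
# DPI is a linear measure: 300 DPI means 300 dots along each inch,
# so a one-inch square holds 300 * 300 dots, not 300.
dpi = 300
dots_per_square_inch = dpi ** 2
print(dots_per_square_inch)  # 90000
```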
There is an odd line at the beginning of this section that is a total non sequitur:
For audio compression, see DPCM.
Huh? I think it should be removed. — Preceding unsigned comment added by Rwillfrd ( talk • contribs) 15:36, 8 May 2015 (UTC)
The article now says ( diff):
a 12-point font was represented with 12 pixels on a Macintosh, and 16 pixels (or a physical display height of maybe 19/72 of an inch) on a Windows platform at the same zoom
19/72 of an inch is 6.7 mm, which would imply a physical pixel size of 0.42 mm (6.7 mm divided by 16 pixels). That is a pixel size larger than the typographic point (0.35 mm). But I never knew of such large pixel sizes today; I think such displays have not been produced for at least the last 25 years ( some of the last).
Monitors for the last 20 years may have had such usual pixel sizes:
And so on. Most modern displays are aimed at 0.26 mm (96 DPI) pixel size or less.
I can only imagine some older small 14" monitors with a maximum resolution of 640*480, which gives 0.45 mm. Or modern ≥36" LCD panels with no more than 1920*1080 resolution. But I doubt that such large displays are used for ordinary work.
If we imagine the early days of MS Windows (the mid-1990s) with 15" monitors being most widespread (I believe), 16 pixels would be 6.08 mm (17/72") and 13 pixels 4.94 mm (14/72"). But everything depends on the actual pixel size of an actual monitor, so actual physical font sizes must vary greatly (if such a word can be applied to mm).-- Lüboslóv Yęzýkin ( talk) 17:54, 6 December 2015 (UTC)
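The commenter's arithmetic can be reproduced directly (sizes in mm; the pitches are ones quoted in this thread, and the helper name is mine):

```python
# Physical glyph height = pixel count * dot pitch (all sizes in mm).
def glyph_height_mm(pixels: int, pitch_mm: float) -> float:
    return pixels * pitch_mm

print(glyph_height_mm(16, 0.38))  # 6.08 mm, the 17/72" figure above
print(glyph_height_mm(16, 0.42))  # 6.72 mm, close to the disputed 19/72"
```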
I have removed a merge tag from Oct 2014 from Dots per inch#Computer monitor DPI standards. The merger proposal had no mirrored tag at the proposed target Pixel density, and this article already refers to Pixel density in a hatnote. This is without prejudice to any other editor proposing the merge again per WP:MERGE. Shhhnotsoloud ( talk) 15:56, 12 May 2017 (UTC)
the caption refers to a grid of 60x60 and then describes this as giving 36 points. It seems obvious to me that the text was intended to read 6x6, not 60x60, so I have edited the text with that change. — Preceding unsigned comment added by 67.210.40.116 ( talk) 15:20, 9 June 2019 (UTC)
In my opinion those two units are used in such a way that a reader will believe that each of them can exist in both worlds, e.g. "Dots per inch (DPI, or dpi[1]) is a measure of spatial printing, video or image scanner dot density".
DPI can not exist in "video or image scanner"
I haven't read the whole article (I'm a graphic worker here and at Commons) but I just want to point this out so the article can be considered for review. --always ping me-- Goran tek-en ( talk) 11:55, 13 November 2021 (UTC)
DPI is mostly used to tell at what resolution an image should be printed. Of course, this should really be "pixels per inch"! But since it has become a standard, I think it should be explained here.-- Kasper Hviid 18:33, 9 Nov 2004 (UTC)
I wasn't able to find the references you gave on lexmark.com or olympus-europa.com, but it doesn't surprise me that DPI would be used in this broad way in documentation intended for the general consumer. I've seen scanner software that uses DPI to mean samples per inch. "Dots" is a fairly general term for most people; pixels, color samples, and ink spots could all be "dots" to most people. I don't think there's any call for distinguishing an "official" use of the word. It's official if people use it that way, and (I suppose) if it's defined that way in the dictionary, which it is. It's really only the more technical among us who care to differentiate DPI from PPI and SPI.
As for the purpose of describing screen resolution, I suppose it's probably useful in calibrating a computer display to a printing device. If a print shop needs to have things displayed on their computer monitor at the same size they will be printed, monitor PPI is useful. I thought about adding to the pixels per inch article to include the idea of pixel density on paper, but while it makes sense to me, I know of no other instances of PPI being used in that way. If you find such a usage, let me know! -- Wapcaplet 22:59, 10 Nov 2004 (UTC)
---
Yes, since dpi and ppi are obviously used interchangeably, this interchangeability should be explained here... so that novices looking here for definitions can understand what they find to read in the real world. It seems extremely arrogant to say "Wrong" and "misuse" when obviously pixels per inch has always been called dpi. And still is.
Do some few say ppi? Yes. Do the vast majority say dpi? Absolutely yes.
1. All scanner ratings are specified as dpi, obviously meaning pixels per inch. They don't say "samples per inch"; they all say dpi, which we all know means pixels per inch. Scanners create pixels, not ink dots. Who are you to call every scanner manufacturer wrong?
2. All continuous tone printers (dye subs, Fuji Frontier class, etc) print pixels, and call their ratings dpi too (colored dots also called pixels). Who are you to call all these manufacturers wrong?
3. The most current JPG image file format specification claims to store image resolution in "dots per inch". The most current TIF file format specification claims to store image resolution in "dots per inch" They are referring to pixels... there are no ink dots in image files. Who are you to call these authors of the most common file format specifications wrong?
http://www.w3.org/Graphics/JPEG/jfif3.pdf (page 5)
http://partners.adobe.com/public/developer/en/tiff/TIFF6.pdf (page 38)
4. Google searches on 7/14/06 for
"72 dpi" 17,200,000 links
"72 ppi" 124,000 links
(138 times greater use of dpi... a couple of orders of magnitude more usage)
You may be aware that 72 dpi topics are never about printer ink dots.
When calling everyone else wrong, a wise man would reevaluate his own position. The Wikipedia author who claims misuse of dpi is obviously dead wrong. It is probably only his wishful thinking that the world OUGHT to be as he wishes it to be, but it is just his imagination, and this Wiki definition is definitely WRONG.
The two terms are obviously interchangeable. Wake up, look around, where have you been? Pixels per inch has ALWAYS been dpi. Yes, dpi does also have another use. So what? Almost every English word has multiple meanings and uses. However, which term is best is not important here - this is certainly not the place to decree it (as attempted). Both terms are obviously used with the same meaning (pixels per inch), and that matter is long settled. Say it yourself whichever way you prefer to say it, but we obviously must understand it both ways. Because we see it everywhere both ways. So this both-ways phenomenon needs to be explained in the definitions here. Without bias. About how the real world really is, not about how some author might dream it ought to be.
WHAT IS IMPORTANT is that beginners need to know the two terms are used interchangeably everywhere, with both terms meaning pixels per inch, simply so they can understand most of what they will find to read about the subject of imaging. There is no reason to confuse them even more by telling them everything they read is wrong. Wiki is wrong. The Wiki definition can only totally confuse them.
Beginners do need to know the two concept differences (your two definitions), but once the concepts are known, then the terms are almost arbitrary. We could call them "thingies per inch". The context determines what it means (like all English words), and if the context is about images, dpi can only mean pixels per inch (ppi can mean that too). If the context is about printer ratings, then dpi can only mean ink dots per inch. 71.240.166.27 03:20, 14 July 2006 (UTC)
DPI is the CORRECT term for the target resolution at which an image is to be printed or displayed. It is a value stored in a digital file which indicates the current target printing resolution of that file. To use it otherwise is to sow confusion out of some misguided ideology. —Preceding unsigned comment added by 24.128.156.64 ( talk) 18:19, 10 October 2007 (UTC)
I see several printers advertised with "4800 x 1200 color dpi" and such. Is this some kind of industry conspiracy to redefine the term "dpi"? Or am I misunderstanding something? Example: [1] -- Anon
^ I agree with the last poster. What were you guys even talking about? Grab a ruler and a microscope, print something on one of those printers at "4800x1200 dpi", and count how many ink dots are placed into each horizontal and vertical inch of paper. "DPI" has referred to the spatial resolution a printer is capable of (easily provable, at least on older devices, by making a complex 900x900 pixel image, printing at maximum quality, and checking how big it came out and whether any detail was missing) for as long as I can remember being involved in the computer scene in any way, which goes back a good 24 years (to early 1990). For example, some old 9-pin Kyocera dot matrix was apparently capable of 240x160 dpi... Chunky, but better than the 216x144 of some cheap rivals. What exactly are you trying to disprove, here? 193.63.174.211 ( talk) 11:00, 19 February 2014 (UTC)
I am concerned about the following statement:
Computer displays work in a similar fashion to printers: they use a combination of different amounts of the primary colors (in this case, the additive primaries: red, green, and blue) to produce a wide range of visible colors. Most printers use the (subtractive) primaries and black in different combinations and patterns.— Kbolino 02:18, 10 February 2006 (UTC)
"The DPI measurement of a printer is dependent upon several factors, including the method by which ink is applied, the quality of the printer components, and the quality of the ink and paper used."
This is not true, or at least is confusing.
The DPI in the printing direction is dependent on the head firing frequency and the linear print speed. The DPI in the advance direction (perpendicular to the printing direction) is dependent on the spacing of actuators (e.g. nozzles for inkjet) on a head, and the angle of the heads. Each of these can be multiplied by use of interleaving/"weaving" using multiple passes and/or multiple heads.
What the sentence above may have been trying to get at is that different print modes can use different firing frequencies, linear speeds, interleaving factors, etc., and the effect of ink and media settings in print drivers is often to change the print mode (possibly in addition to other software settings that don't affect DPI). Also, different print head technologies may improve at different rates in terms of firing frequency, actuator spacing, etc.
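As a concrete illustration of the relationship described above (the numbers are invented for the example, not taken from any real printer):

```python
# Print-direction DPI = head firing frequency / linear carriage speed,
# per the explanation above (illustrative, hypothetical figures).
def print_direction_dpi(firing_hz: float, speed_in_per_s: float) -> float:
    return firing_hz / speed_in_per_s

print(print_direction_dpi(14400, 24))  # 600.0 dpi at 24 inches/second
# Interleaving/"weaving" with multiple passes multiplies the figure:
print(print_direction_dpi(14400, 24) * 2)  # 1200.0 dpi with 2x weaving
```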
On the subject of advertisements, I strongly suspect that some of the dpi figures quoted in printer adverts are inflated. You can inflate a dpi figure by:
This kind of creative arithmetic is all the result of trying to munge various resolution and quality factors into a single number for marketing purposes. It's similar to how clock speed used to be used to indicate how "fast" a processor was. At its worst, it can lead to distorted technical decisions that maximize DPI with no improvement in, or even at the expense of quality (just as the Pentium 4 design was distorted to maximize clock speed).
It's unlikely that someone could see a visible improvement in resolution above about 1000 dpi with the unaided eye at a normal reading distance. The extra quality that you can get from a higher dpi than that is not due to an increase in resolution; it's due to a reduction in "graininess" (and possibly better hiding of head defects) from using smaller drop volumes, which requires you to use more dots in a given space to achieve the same ink density.
To meaningfully compare printers, you need at the very least to know the volume of ink in a drop (for inkjet heads as of 2006, this can vary from about 1 to 80 picolitres), including what "subdrop" volumes are possible in the case of variable-dot heads, as well as the real dpi figure in each direction. The overall quality will also depend on halftoning algorithms, the gamut of the inks used, color management, positioning accuracy of the printer mechanism and any encoders, head defects and how well the print mode hides them, etc. The intended application is also significant: to give an extreme example, there's no point in achieving "photographic" resolution in a printer that will be used to print billboards -- although color gamut would still be very important for the latter.
DavidHopwood 00:16, 5 June 2006 (UTC) (working for, but not speaking for, a printer manufacturer)
"There are some ongoing efforts to abandon the dpi in favor of the dot size given in micrometres (µm). This is however hindered by leading companies located in the USA, one of the few remaining countries to not use the metric system exclusively."
I wouldn't blame US companies for this, even though I'm an enthusiastic S.I. advocate. Software interfaces to RIP packages and driver APIs require dpi, and there's no compelling reason to change them. Despite this, it is possible for a printer controller implementation to be internally almost S.I.-only. DavidHopwood 01:20, 5 June 2006 (UTC)
This is pretty trivial, but should the case (DPI or dpi) be standardized in this article? The last section uses dpi while the others use DPI. -- MatthewBChambers 09:13, 2 October 2007 (UTC)
Both of the external links are low quality links to pages by writers with an ideological axe to grind and a limited understanding of the topic. —Preceding unsigned comment added by 24.128.156.64 ( talk • contribs)
This sentence "Therefore it is meaningless to say that a digitally stored image has a resolution of 72 DPI." is just simply, clearly unequivocally false. It is also misleading in a way that exacerbates existing confusion among users. —Preceding unsigned comment added by 24.128.156.64 ( talk) 16:44, 12 October 2007 (UTC)
Hmm, there seems to be some disagreement here about the origin of the term DPI. The position of the article is that it has its origins in printers, while 24.128.156.64 says that it had its origins in digital file formats. Does anyone have a reference to support either position? Personally, I think it's the printers. Rocketmagnet 17:10, 12 October 2007 (UTC)
I don't think it is only an issue of origin. It is, most importantly, an issue of use. Users of graphics software get confused on this issue as it is. To deny the fact that all professional graphics software and most amateur graphics software allows the editing of a value called DPI which gets stored with the file adds to that confusion. Here is an example of a page using DPI correctly: http://msdn2.microsoft.com/en-us/library/ms838191.aspx —Preceding unsigned comment added by 24.128.156.64 ( talk) 17:23, 12 October 2007 (UTC)
:) -- jacobolus (t) 18:07, 13 October 2007 (UTC)
24.128.156.64 22:45, 13 October 2007 (UTC)
just my 2 cents on the words on top: the number of pixels is a hard-coded value in any pixel format. the dpi value is sort of a scratch parameter in common image formats. people doing layouts and then prints first have to decide what width and height (measured in cm or inches) they want to select for the given image. only after that decision can you do the math and divide the number of pixels by the number of inches... some folks prefer talking in the columns typical of newspapers, e.g. 3 columns width - and that's a fixed length for a specific type of paper. -- Alexander.stohr ( talk) 16:18, 5 August 2010 (UTC)
Can editors who understand these things (I don't) add explanations to the article that make the kind of jargon typically found in printer specs understandable to laypeople?
Examples (taken from HP and Brother):
-- Lambiam 08:43, 24 April 2009 (UTC)
--ADDED QUERY (2009-04-30): At the start of this article, whose length I've yet to scrutinize, comes an example that sadly I find far from enlightening--to wit:
| An example of misuse would be if an LCD monitor manufacturer claimed that | a 320x240 pixel 3" monitor (2.4"x1.8") actually had a resolution of 400 DPI, | (three times the pixels per inch).
[NB: delete this last comma--NOT wanted before parenthesis, delimited by parens.]
PLEASE "show your work": i.e., what is being multiplied/divided-into what? E.g., I multiply 320x240 = 76_800, and 2.4x1.8 = 4.32 and think that density should result from 76_800 / 4.32 (= 17_777.77...), but that's not close to 400! ?? Dividing 76_800 by "400" gets me a 192 which I can't figure how to map ... . .:. It would be helpful, at this introductory point, to show the calculation! Thanks. (-; 216.194.229.45 ( talk) 13:18, 30 April 2009 (UTC)
Quote from the article: "Software programs render images to the virtual screen and then the operating system renders the virtual screen onto the physical screen; with a logical PPI of 96 PPI, older programs can still run properly regardless of the PPI provided by the physical screen." Usability and readability are heavily influenced by the technology (laser beamer, flat screen with/without contrast enhancements, cathode ray tube), by the viewer's distance, by the viewer's individual vision capabilities, and by some "crap" the operating system does, e.g. anti-aliasing fonts. Even the environment (lit, dark, foggy, reflections, ...) might play a big role every now and then. 99.5% of all computer programs will run on any screen with any PPI value; they just don't care about it. There are pretty few programs that need to fulfill any exact measures; rather, sometimes the application offers a setup, e.g. for the font used in editors and terminals. Furthermore, modern operating systems offer lots of tuning for e.g. border widths, menu fonts, window decoration and so on. If someone wants to use an 8x8 font he can, but he can use a 14x16 font as well; the user adapts things to what he wants to see. If you connect a 12" tube or a 40" flat screen, the operating system will rather respond with a desktop having more or less width than use the PPI value to adjust to the changed conditions. BTW, for much legacy software (e.g. a window of a C64 emulator) there are even zoom modes, and nearly all modern consumer flat screens have built-in zoom, even if folks like to use such devices in a 1:1 pixel-match mode. Forget about PPI and fix that statement in the article. -- Alexander.stohr ( talk) 16:27, 5 August 2010 (UTC)
Are the common monitor "pitches" given in terms of "dot pitch" or "dot trio pitch"? Mfwitten ( talk) 22:22, 1 October 2011 (UTC)
Just my luck to stumble, when looking for some clarification, on this article - one of those where the discussion is much longer than the article itself :-) There are lots of sentences in the article (and in the discussion) which I don't understand; others which seem confusing. But then again, I am far from a specialist in this field. Yet, maybe the following is helpful, if only to bring out the points of disagreement.
1. I think that in this article (and in the discussion) two objectives are intertwined, leading to confusion. I.m.o. an encyclopedia should a) describe the (various) common uses of a term; b) offer explanations and background knowledge. Therefore, I wholeheartedly agree that terms like 'wrong' or 'misuse of the word' or even 'misleading' should be avoided. However, an encyclopedia is not a dictionary; it should go one step further and EXPLAIN certain things. In fact, the various 'definitions' of dpi and ppi cannot even be comprehended without some background knowledge, and so would by themselves be of no use to any reader. Any explanation requires a strict definition of the terms used. Quite apart from any claim to 'correctness', when the explanation does not make a clear choice of words it becomes incomprehensible (which, i.m.o., it is right now). Both requirements need not be contradictory; one could very well describe the various ways in which a term is used, and nevertheless, when it comes to an explanation, use the terms in one specific well-defined sense.
2. So, what 'definitions' of DPI and PPI and other terms will the explanation use? In the discussion, it is not at all clear to me when the disagreement is about the use of a word and when it is about facts. Surely, these two kinds of disagreements should be separated as well as possible. In my opinion, following a historical line may be the most helpful to further comprehension.
- As far as I know, the term DPI was first used for digital phototypesetting machines, like the Digiset (VideoComp in the USA), introduced in 1966. (I'm not sure about this; it would require verification.) In the '80s, a digital phototypesetter could achieve resolutions up to 3000 dpi (or was it 4000?). DPI, then, describes the number of circular black dots, of varying size up to completely overlapping, per inch of paper. (The density of coloured dots was, at the time, described by different units.) When, for a colour printer, just one figure is given for the "DPI", it pertains to the number of black dots per inch, and for a very good reason: to make it comparable to the DPI of a b/w laser printer, still widely in use.
I would propose to use the term "DPI", in the explanatory part of the article, in this sense only.
When two figures are given, as in "1200 x 4800 color dpi", I agree with Wapcaplet that the figures refer to black dots (used to print text and comparable to the figure for b/w printers) and 4-colour dots (to print coloured pictures) respectively. This use of 'DPI' seems to me a quite reasonable 'extension' of the term DPI, adapting it to the colour-print era. (And frankly, I don't understand the objections raised by 76.126.134.152.) The question remains, however, what to make of printer specifications where the second number does not equal 4x the first number (or vice versa, since there seems to be no rule on the order of the two numbers). Does anyone know what is meant by a printer specification like "1200x2400 dpi"? I suspect it means that text is printed at 1200 dpi and colour pictures are printed at 600 dpi (that is: 600 dots of one colour per inch). Finally, I'd like to know what the 'x' between the two figures means; it suggests something like area, like 'horizontal and vertical'. If what I suppose here is true, however, it has no meaning whatsoever and could just as well have been a dot or a comma or a semicolon. This should be pointed out to the reader.
3. The term 'pixel' (and the related 'PPI') originates in an entirely different field: the 'digitisation' of pictures. To make up a digital picture from an image, the image is overlaid with a grid of (theoretically) squares, and for each square an average colour and luminosity is determined, either by a scanner or a camera. The resulting 'bitmap', containing values for the colour and luminosity of each pixel, is then saved in a picture file. I would propose to use the term 'pixel' in the explanatory part of the article in this sense only.
Many file formats (but I don't know which ones exactly) give the option of writing a value for PPI in the meta-section of the file, thus making it possible to determine the real size of the scanned original. Evidently, this value has no meaning for a picture of a landscape taken with a digital camera. (To my knowledge, cameras do not save this value when saving in, for instance, JPEG, but most scanning software does fill in this value.)
4. The term PPI is, I think, not as clear as DPI. For scanners, the meaning would seem quite clear to me: it describes the size of the grid used to scan the picture. PPI, used in that sense, is a useful measure for the 'resolution' of a scanner. In practice, however, I rarely see PPI in the specs of scanners; manufacturers seem to prefer the more widely known term 'DPI' instead, and it is, I think, anybody's guess what they mean by that. Some may simply use 'DPI' when they mean 'PPI' (thus giving useful information on the quality of the scanner); others may use 'DPI' to simply refer to the measurements of the scanned picture as laid down in the meta-section of the resulting JPG or TIFF file, which has nothing whatsoever to do with the quality of the scan. (In practice, I have met both.)
For cameras, the use of the term 'PPI' seems less common, and for good reason: it is not clear what it refers to. When used, however, PPI does indeed seem to refer to the size of the sensor: given a total number of pixels, the bigger the chip (and thus the lower the PPI value), the better the quality.
I would propose to use the term PPI, in the explanatory section of the article, primarily to denote the resolution of a scanner. (The meaning when used in reference to cameras is somewhat confusing, as a lower figure is associated with better quality, which is quite the reverse of its meaning when used with respect to scanners.)
5. - The concept of dpi could conceivably be extended to monitors, as these, too, are output devices, and the original CRTs used coloured dots. However, the meaning is not as clear, as a monitor does not produce 'black dots'. The term 'dot pitch' has always been more popular for monitors. (Although I don't know whether this referred to the distance between individual dots, between the dots of one colour, or between groups of 3 coloured dots.) In fact, most modern LCD monitors do not produce dots at all; instead, they work with squares built up from 3 coloured stripes, usually denoted as 'pixels'. (I'd propose to use the term screen-pixels to avoid confusion.) Most often, the resolution is described as "X*Y pixels", while the physical size is described by the length of a diagonal across the screen.
Thus, for most people the terms DPI and PPI in connection with a monitor don't add much, except confusion. In graphic design, though, you may want to make the size on-screen exactly the same as the size in print. In practice, you need to know the PPI of your screen to achieve this. One can calculate it as described in the article, or simply take the vertical number of screen-pixels and divide it by the screen height in inches.
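Both calculation routes mentioned above give the same figure; a small Python sketch (the monitor dimensions are made up for illustration):

```python
import math

def ppi_from_diagonal(px_w: int, px_h: int, diagonal_inches: float) -> float:
    """Screen PPI from the pixel grid and the advertised diagonal (the article's method)."""
    return math.hypot(px_w, px_h) / diagonal_inches

def ppi_from_height(px_h: int, height_inches: float) -> float:
    """Screen PPI from vertical pixels and a measured screen height (the shortcut above)."""
    return px_h / height_inches

# Hypothetical 24" 1920x1200 monitor; its 16:10 panel is about 12.72" tall.
print(round(ppi_from_diagonal(1920, 1200, 24.0), 1))  # ~94.3
print(round(ppi_from_height(1200, 12.72), 1))         # ~94.3 as well
```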
6. Some remarks
- Digital printers produce dots in a certain colour. These dots can vary in size, and are so arranged that they can overlap, ultimately, when fully overlapping, producing black. (I'd put an asterisk here, as this is very subtle stuff, concerning the way our eyes perceive colours etc., but I think this description may do in this context.) Screens, on the other hand, from the CRTs of old to modern LCD or plasma, do NOT vary the dot size, but they CAN vary the luminosity of the dots (or stripes or whatever). The dots can be circular, but some printers can produce elliptical or even semi-square dots; the printing software uses this option to produce the best-looking output. I quote from http://www.prepressure.com/printing-dictionary/d "The dot shape is varied to minimize the dot gain at the point where dots join one another. Elliptical dots minimize the sudden dot gain where corners of dots connect; they may connect in their short direction at 40% dot area and in their long direction at 60% dot area. Round dots, often used for newsprint, may not connect until 70% dot area." (Here, I'm out of my depth again: hopefully someone knows more about this. Although, on the other hand, it doesn't seem very relevant for this article.)
- It should be noted (and I sorely miss this in the article) that the term DPI, when referring to dots of one colour (notably black), is still highly relevant when used to describe a printer. Text, as opposed to pictures, is often, if not always, delivered to the "printing engine" in a vector format, which is translated to a dot pattern according to the specifications of the printer. In other words: a printer with a higher dpi specification will give you crisper text on your print. The same is of course true for pictures delivered to the 'printing engine' in vector format. (Ensuring that a vector drawing does indeed profit from the maximum dpi of the printer seems to me an art in itself when you're not using Adobe software. Yet, that seems beyond the scope of this article.)
- Obviously, to print a bitmap (that is: a file describing pixels and resulting from a scanner or camera), the pixel pattern has to be 'translated' to the dot pattern of the printer. A "1:1 print" simply translates each pixel into the appropriate sizes of the dots in one 'colour group' (which may consist of 4, 5, 7 or even 12 different colours - but that's another article). Obviously, the better the printer, the smaller, and crisper, the "1:1" print. When enlarging a picture in print beyond "1:1", one pixel is spread over a number of groups of dots. Thus, the picture gets vaguer, up to the point where the individual pixels become visible. When reducing the size below "1:1", more than one pixel is available to determine the dot sizes for the printer, resulting in a print as good as the printer can achieve. (Theoretically, one would expect there to be certain favourable proportions, e.g. '4 pixels to one colour group', which is, theoretically, much easier to render than '1.7 pixels to one colour group'.) However, in practice the rendering software is very sophisticated and there seems to be hardly any gain in using such simple proportions. (Here again, I'm out of my depth; but maybe this is beyond the scope of this article anyway.)
7. My comments on the article
In view of the above, I have a number of comments on and questions about this article.
- "The DPI value tends to correlate with image resolution, but is related only indirectly." This seems to me unclear, if only because 'correlation' is not a term many people understand. But apart from that: DPI and PPI are either synonymous or (as I propose to use the terms) they have no correlation whatsoever, not even 'indirectly'.
- The article starts by explaining 'monitor resolution', which is the most problematic use of the term dpi. Bad idea, I think.
- "A less misleading term, therefore, is pixels per inch." I don't see what is misleading and I don't see the 'therefore'.
- "the measurement of the distance between the centers of adjacent groups of three dots/rectangles/squares on the CRT screen." 1. To my knowledge, there are no CRT screens with squares or rectangles. 2. To my knowledge, there are no LCD screens with 'groups of squares'. 3. Is this true? In other words, could the text be amended to "the measurement of the distance between the centers of adjacent groups of three dots on a CRT screen, or between the centers of two squares (each consisting of 3 coloured rectangles) on an LCD screen"?
- "DPI is used to describe the resolution number of dots per inch in a digital print and the printing resolution of a hard copy print dot gain; the increase in the size of the halftone dots during printing. This is caused by the spreading of ink on the surface of the media." This sentence forms the heart of the article, in the sense that it defines the term that is the title of the article. Yet it contains so many unclear points that I'll have to take them one by one. "DPI is used to describe the resolution number of dots per inch": should this not be either "to describe the resolution" or "to describe the number of dots per inch"? "and the printing resolution of a hard copy print dot gain": I fail to see why this should be added; I have no idea what the difference is between a 'digital print', as described in the first part of the sentence, and a "hard copy print" as described in this second part, and I have no idea what a 'dot gain' is. "the increase in the size of the halftone dots during printing": I don't know how this is connected to the statement before the semicolon; I don't know what to make of 'halftone dots', nor what dpi has to do with "spreading of ink on the surface of the media".
In summary: this definition is totally unclear to me.
- "Up to a point, printers with higher DPI produce clearer and more detailed output." Up to which point?
- "A printer does not necessarily have a single DPI measurement; it is dependent on print mode, which is usually influenced by driver settings." This seems clumsily put; printers always have a maximum dpi (and this is not a measurement but a value). What dpi is effectively used depends on the user's choices (notably choosing 'economy mode').
- "An inkjet printer sprays ink through tiny nozzles, and is typically capable of 300-600 DPI.[1] A laser printer applies toner through a controlled electrostatic charge, and may be in the range of 600 to 1,800 DPI." As the definition of dpi is unclear, so are these statements. Yet, quoting higher values for laser printers than for ink-jet printers seems to me doubtful in whatever sense the words are taken.
- "The DPI measurement of a printer often needs to be considerably higher than the pixels per inch (PPI) measurement of a video display in order to produce similar-quality output." Is this so? Why? And why 'often'? How often?
- "This is due to the limited range of colours for each dot typically available on a printer. " Yes, very limited indeed: just one.
- "At each dot position, the simplest type of colour printer can print no dot, or a dot consisting of a fixed volume of ink in each of four colour channels" This is, I think, not true. Even allowing for the fact that the word 'dot' is used here (very confusingly) to denote a 'dot group' consisting of dots in all colours the printer is capable of, this is still unnecessarily opaque; I wouldn't know what a 'colour channel' is, for instance. Nor can I see how 'the simplest type of colour printer' works differently, in this respect, from the 'most advanced type of colour printer'. Nor is there any 'ink' in my laser printer. Finally, the 'fixed volume of ink' is to my knowledge simply untrue; the whole point of colour printing is that the size of the dots (and thus the volume of the ink applied, in the case of an ink-jet) varies.
- "typically CMYK with cyan, magenta, yellow and black ink) or 2e4 = 16 colours" This, too, is to me incomprehensible. Prints are made with dots of varying size (not with a 'fixed volume of ink'). The principle of colour printing is partly based on the fact that ink and toner colours, like normal paint, can produce mixed colours when they overlap and also when they are spaced apart - a phenomenon well known to painters, notably the impressionists. Our eyes being able to discern just 3 colours, all possible colours can be achieved by printing dots of varying size in 3 colours (in practice 4 or more are used). I have no idea why the number '16' would be relevant here, nor where the formula comes from. In fact, I have no idea why all this is relevant to the article.
- "Higher-end inkjet printers can offer 5, 6 or 7 ink colours giving 32, 64 or 128 possible tones per dot location." Incomprehensible - see above.
- "Contrast this to a standard sRGB monitor where each pixel produces 256 intensities of light in each of three channels (RGB)." I have no idea what it is I am supposed to be contrasting here, but it DOES throw up a point: surely, the variation in size of a printer dot comes in discrete steps. I have, however, never seen this in the specification of a printer. Does anyone know more about this?
- "While some colour printers can produce variable drop volumes at each dot position, " Apart from the fact that this is, again, ink-jet-talk and that it is not the volume produced, but the dot-size produced which is relevant, I would like to know what colour printer is NOT able to do this.
- "the number of colours is still typically less than on a monitor." Why would that be? I can't follow the explanation. Nor can I see the relevance with respect to the explanation of the term 'dots per inch'.
- "if a 100×100-pixel image is to be printed inside a one-inch square, the printer must be capable of 400 to 600 dots per inch in order to accurately reproduce the image." This, I think, is true. But: first, the term 'accurately' seems unnecessarily vague. Hereabove, I used printing "1:1" to denote printing a picture with one pixel translating to one 'colour group' on the printer. (Why not be precise instead of talking about 'accurately' or 'faithfully' and such?) Second, the explanation leading up to this fact makes it seem like rocket science, while in fact it's quite trivial: to print 100 pixels "1:1", the printer uses 100 colour groups; in the case of a four-colour printer, this means 400 coloured dots. In the case of a 6-colour printer, this means 600 dots.
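The 'trivial' arithmetic described in this comment can be made explicit; the ink counts below are just the two cases the quoted sentence mentions:

```python
def printer_dpi_for_one_to_one(image_ppi: int, inks_per_colour_group: int) -> int:
    """Printer DPI needed so each image pixel maps to one full colour group ('1:1' printing)."""
    return image_ppi * inks_per_colour_group

print(printer_dpi_for_one_to_one(100, 4))  # four-colour (CMYK) printer -> 400
print(printer_dpi_for_one_to_one(100, 6))  # six-colour printer -> 600
```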
- Section "DPI or PPI in digital image files" No factual quibbles here; just the observation that the wording is imprecise, which doesn't help, i.m.o., to further comprehension, and that some descriptions use difficult words unnecessarily. For example: "Some digital file formats record a DPI value, or more commonly a PPI (pixels per inch) value". Comment: formats don't record anything, but computer programs do. Some formats offer the possibility of recording PPI; I do not know of ANY format offering the option of recording DPI. MANY computer programs confuse dpi and ppi and represent the ppi value as dpi. A PPI value in the file only has meaning when recorded by a scanner program; cameras often record some value (Nikon gives 300 ppi, Canon gives 180 ppi) but these values are entirely without meaning. "If it is labeled as 250 PPI": what does 'labeled' mean? Let's be precise. "An image may also be resampled to change the number of pixels": incomprehensible given the level of the article. Moreover, what has this to do with the explanation of "dots per inch"?
- Precise wording can, I think, help the understanding. For example, instead of: "Changing the PPI to 100 in an image editing program would tell the printer to print it at a size of 10×10 inches. However, changing the PPI value would not change the size of the image in pixels which would still be 1,000 × 1,000." I would say: "Changing the PPI setting in the 'description' part of an image file (which can be done with an image editing program) would tell the printer to print it at a size of 10×10 inches. This, of course, does not change the pixels in the image file: the picture still consists of 1,000 × 1,000 pixels."
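The point of that proposed rewording is easy to verify numerically; these are the figures from the quoted passage:

```python
def print_size_inches(pixels: int, ppi: int) -> float:
    """Physical print size implied by a PPI value stored in a file's meta-section."""
    return pixels / ppi

pixels = 1000                            # the pixel data itself never changes
print(print_size_inches(pixels, 100))    # -> 10.0 inches, as in the quoted example
print(print_size_inches(pixels, 250))    # -> 4.0 inches; same 1000 pixels either way
```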
- Section "Computer monitor DPI standards" I.m.o. this is a good piece, drawing attention to what is indeed a major source of confusion. But the first time I read it I couldn't make head or tail of it. While reading, I kept waiting to learn what 'the problem' is and what 'confusion' was sown by Microsoft's choice. I didn't get that, and I still don't get it. Furthermore, it doesn't help that the use of terms is a bit 'loose', while some odd expressions ("a resolution of 1 megapixels", "the intended size of the image") may put an unsuspecting reader on the wrong foot (as they did me). Also, to introduce the word 'vector image' for the first time in the article with the sentence "For vector images, there is no equivalent of resampling an image when it is resized" seems quite inadequate (there's not even a reference to some other article here). The core of the matter is not at all hard to describe: as I said above, to make the screen image the same size as printed output, you need to know the PPI value of your monitor and instruct the software accordingly. (There, now: I said it in one sentence.) Overrating the PPI of the monitor leads, at least in serious graphics applications, to a 'larger-than-life' picture on-screen. (This, by the way, could be explained in some more detail.) However, I still don't see any 'problem' or 'source of confusion'.
I suspect (but is this true?) that a temporary problem has been that Apple software, being written for Apple monitors, did not allow for the software being instructed this way: it simply assumed the monitor was 72 PPI, Apple's 'standard value'. However, by now all Apple graphics software can be so adjusted (can't it?). Windows, of course, had no say whatsoever over the PPI of the monitor used, and thus Windows software always went with the value used by the OS, which could be adjusted by the user in accordance with the specs of his monitor. (To appease the Apple fans: the Windows graphics software was, at the time, years behind Apple's.)
All in all, the only possible 'sources of confusion' I can see are these: - Microsoft consistently uses DPI for PPI (where 'PPI' stands for 'screen-pixels per inch'), and the dialogue box used to adjust the value insists that one sets "The DPI value of the screen". This, of course, is nonsensical, since the "DPI value of the screen" is a fixed given, not a user's choice. I bet this wording has spread lots of confusion; it should have been something like "What is the PPI value of your monitor?", with an explanation of how to determine this value and of the consequences of making the software believe it is higher or lower than it actually is. - Second possible source of confusion: the many, many articles on the web making a big fuss about "Apple's 72 dpi" vs "Microsoft's 96 dpi", where in fact all this is of interest to IT historians only, or so it would seem to me.
In summary: I think this section, as it stands, is long and doesn't add to comprehending the term "Dots per inch". Propose to strike, or else to re-write in such a way that it deals better with common misunderstandings about "72 dpi" and "96 dpi". — Preceding unsigned comment added by Mabel2 ( talk • contribs) 17:20, 9 January 2012 (UTC)
Section "Computer monitor DPI standards" is confusing and wrong, better to remove it. — Preceding unsigned comment added by 2.34.179.118 ( talk) 10:46, 14 March 2013 (UTC)
...I don't know if it's just me being dumb, or what, but couldn't a whole load of the existing discussion on the article re: zoom levels and so on be replaced by a paraphrasing of "[...] meaning that on a Windows PC, to display the same piece of text with the same pixel resolution as on a Macintosh at '100%' zoom, on an otherwise identical display, the user would instead have to set a zoom of 75%".
You know, given that 1.3333r is the reciprocal of 0.75 and all... (i.e., 1 divided by 4/3 is the same as 1 multiplied by 3/4...). The current version really seems to overcomplicate things, given that it's such a simple mathematical relationship with such a simple solution, should you need to obtain something closer to Mac-like output (well, you'd also need to adjust the H and V size pretty much to their minimum settings even on a relatively small PC monitor, and even considering the extra 128 H & 138 V pixels, given how tiny the original 128K and 512K monitors were).
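The reciprocal relationship claimed here checks out; a quick Python sanity check using the 72/96 figures discussed in this section:

```python
mac_ppi, windows_ppi = 72, 96

# Nominal device-pixel height of a 12-point glyph on each platform
# (points are 1/72 inch, so pixels = points * ppi / 72):
mac_px = 12 * mac_ppi / 72        # -> 12.0 pixels
win_px = 12 * windows_ppi / 72    # -> 16.0 pixels

zoom = mac_px / win_px            # 0.75: the 75% zoom suggested above
print(zoom == 1 / (windows_ppi / mac_ppi))  # -> True
```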
The rest of the talk about how different monitors and video cards can produce output with variable real-life PPI, along with all the examples, as fascinating as they are for a techno-cruft historian, are pretty irrelevant overall, not to mention fairly self-evident to anyone who's seen more than one model of computer monitor in their lifetime. It may as well be a table of apparent PPI for different size SD and HD televisions.
Incidentally, the part in the lede about monitor pitch could probably also be cut down to "dot pitches from around 0.42mm to 0.22mm, decreasing over time, and mainly in the low 30s or high 20s", instead of the (still non-exhaustive) list of more than half a dozen very similar sizes starting at 0.39mm.

Incidentally, I've owned a VGA monitor with a 0.42 pitch. It was coarse as hell, but it existed, was dirt cheap when bought, and was still sufficient (with barely any border on a 14" monitor with typical bezel thickness) for a reasonably clear 640x480 or 640x350 and a just-about-passable 720x400, thanks to the subpixel effects of mostly-monochrome text with tall pixels being displayed over multiple horizontally-offset triangular (hexagonal?) RGB clusters. 0.39 was more the practical maximum for actually-sharp text mode, and you needed a lower figure for SVGA (0.31? 0.34 for 15"?) or XGA (0.26 at 15", 0.31 at 17"?), but if you're mainly using Windows in VGA or playing games at 320x200 it's irrelevant. I have reason to believe that wasn't even the worst, but I haven't much way of proving that.

Also, the lede doesn't clarify that this only applied to colour monitors. The monochrome, single-phosphor, dot-mask-less CRTs used in early Macs, for hi-res output from the Atari ST and Amiga, for EGA/VGA mono and MDA/Hercules, and in 15 kHz form as a cheap option for non-game/graphics use of various 8- and 16-bit home computers (including CGA PCs and PCjrs) didn't HAVE any real concept of dot pitch, beyond the actual pixel output frequency, the scanline count, and how tightly focused the electron beam was; all of these were somewhat arbitrary but typically gave a perceptually "clearer" and "sharper" appearance than the fuzzier-edged pixels of a typical colour TV or monitor. Certainly any suggestion of it being relevant to pre-colour Macs or to Hercules-equipped PCs is misleading. 193.63.174.254 ( talk) 17:17, 14 March 2017 (UTC)
There is a move discussion in progress on Talk:Kilometres per hour which affects this page. Please participate on that page and not in this talk page section. Thank you. — RMCD bot 00:59, 10 December 2013 (UTC)
I've produced this rather more accurate one, but I don't have a Wiki Commons account and really don't want to end up adding to my teetering stack of one-hit accounts for no good reason. Could someone grab this and upload it in my stead? Take the credit if you like :p ... just so long as you also resist the temptation to rename it "Blue Balls.png" (even though that's better than the current file's name...)
It's currently hosted at: http://imgur.com/xwmMaAr
Thanks! 193.63.174.211 ( talk) 11:07, 19 February 2014 (UTC)
I looked for this article to explain if "dots per inch" normally refers to a linear measurement (e.g. 100 horizontal pixels per horizontal inch) or to an area measurement (e.g. 100 pixels in a 1 inch by 1 inch square). Maybe confusion over this isn't very common, but the article never addresses this directly (I was able to figure out from some examples that it is linear). — Preceding unsigned comment added by 77.58.20.100 ( talk) 22:15, 4 January 2015 (UTC)
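To spell out the answer this comment had to infer from the examples: DPI is indeed a linear measure, so the dot count over an area is the square of the DPI figure. For example:

```python
linear_dpi = 300                          # 300 dots along each linear inch
dots_per_square_inch = linear_dpi ** 2    # a 1" x 1" square holds 300 x 300 dots
print(dots_per_square_inch)               # -> 90000
```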
There is an odd line at the beginning of this section that is totally non sequitur:
For audio compression, see DPCM.
Huh? I think it should be removed. — Preceding unsigned comment added by Rwillfrd ( talk • contribs) 15:36, 8 May 2015 (UTC)
The article now says ( diff):
a 12-point font was represented with 12 pixels on a Macintosh, and 16 pixels (or a physical display height of maybe 19/72 of an inch) on a Windows platform at the same zoom
19/72 of an inch is 6.7 mm, which would imply a physical pixel size of 0.42 mm (6.7 mm divided by 16 pixels). That is, the pixel size would be larger than the typographic point (0.35 mm). But I have never seen such large pixel sizes today; I think such displays have not been produced for at least the last 25 years ( some of the last).
Monitors for the last 20 years may have had such usual pixel sizes:
And so on. Most modern displays are aimed at 0.26 mm (96 DPI) pixel size or less.
I can only imagine some older small 14" monitors with a maximum resolution of 640*480, which gives 0.45 mm. Or modern ≥36" LCD panels with no more than 1920*1080 resolution. But I doubt that such large displays are used for ordinary work.
If we imagine the early days of MS Windows (the mid-1990s) with 15" monitors being the most widespread (I believe), 16 pixels would be 6.08 mm (17/72") and 13 pixels 4.94 mm (14/72"). But everything depends on the actual pixel size of the actual monitor, so actual physical font sizes must vary greatly (if such a word can be applied to mm).-- Lüboslóv Yęzýkin ( talk) 17:54, 6 December 2015 (UTC)
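The arithmetic in this thread can be reproduced in a few lines; the 19/72-inch span and 16-pixel count come from the quoted article text:

```python
MM_PER_INCH = 25.4

def pixel_pitch_mm(span_inches: float, pixels: int) -> float:
    """Implied physical pixel pitch in millimetres."""
    return span_inches * MM_PER_INCH / pixels

# 16 pixels spanning 19/72 of an inch -> roughly 0.42 mm per pixel,
# larger than the 0.35 mm typographic point, as noted above.
print(round(pixel_pitch_mm(19 / 72, 16), 2))  # -> 0.42
```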
I have removed a merge tag from Oct 2014 from Dots per inch#Computer monitor DPI standards. The merger proposal had no mirrored tag at the proposed target Pixel density, and this article already refers to Pixel density in a hatnote. This is without prejudice to any other editor proposing the merge again per WP:MERGE. Shhhnotsoloud ( talk) 15:56, 12 May 2017 (UTC)
The caption refers to a grid of 60x60 and then describes this as giving 36 points. It seems obvious to me that the text was intended to read 6x6, not 60x60, so I have edited the text with that change. — Preceding unsigned comment added by 67.210.40.116 ( talk) 15:20, 9 June 2019 (UTC)
In my opinion those two units are used in such a way that a reader will believe that each of them can exist in both worlds, e.g. "Dots per inch (DPI, or dpi[1]) is a measure of spatial printing, video or image scanner dot density".
DPI cannot exist in "video or image scanner".
I haven't read the whole article (I'm a graphics worker here and at Commons) but I just want to point this out so the article can be considered for review. --always ping me--
Goran tek-en ( talk) 11:55, 13 November 2021 (UTC)