
What is the difference between dot and pixel?

+4
−5

I've read many explanations, but either they all are too abstruse or they gainsay each other.

Why Dots Per Inch Isn't Pixels Per Inch

A dot refers to ink density, effectively; a pixel refers to image density on a screen.

Well what does "density" mean?

A quick PSA on "dots" versus "pixels" in LCDs | TechCrunch

It’s actually pretty simple: LCDs are made up of pixels, and pixels are made up of dots.

joojaa's answer is similar.

No, each pixel is represented by multiple dots*.

But Scott gainsaid all this.

There is absolutely zero correlation between pixels and dots. None.

Then I tried Alan Gilbertson's answer.

A pixel (the word was originally coined, iirc, by IBM and derives from "picture element") is the smallest indivisible unit of information in a digital image. Pixels may be displayed, or they may be printed, but you can't divide pixels into smaller pieces to get more information. How many channels and bits per channel make up one pixel is the measure of how subtle the information in a pixel may be, but the basic fact is that 1 pixel is the smallest increment of information in an image. If you do video, you know that pixels don't have to be square -- they are non-square in all older video formats. Square or not, a pixel is still the smallest unit of a picture.

One sentence addled me: what are "channels" and "bits per channel"?

An inch (okay, so you know this already -- bear with me) is a unit of linear measurement on a surface, which could be a screen or a piece of paper.

A dot is, well, a dot. It can be a dot on a screen, or it can be a dot produced by a printhead. Like pixels, dots are atomic. They're either there, or they're not. How much fine detail a screen can display depends on how close the dots are (what they used to call "dot pitch" in the old CRT days). How small the dots are from an inkjet, a laser printer or an imagesetter determines how much fine detail it can reproduce.

What does "atomic" mean? I feel I need to know some physics to understand this answer!

Rafael's answer is the least abstruse, but it still refers to recondite terms like "the bit depth".


2 answers

+7
−0

Dots and pixels can be the same or different, depending on context. In a computer file, an image is usually made up of a rectangular array of color (or gray) values. We call those pixels, for "picture elements".

"Dots" is usually used when trying to realize an image on physical media. Dots can be very much like pixels, or not, depending on the medium.

For example, a dye-sublimation printer can produce an arbitrary color at each location. That's just like a pixel. The physical characteristics of the printer might dictate a fixed dot pitch, like 600 DPI (dots per inch). In effect, this is the density of pixels used to show an image on the output medium.

Let's say you have a 1024 x 768 image you want to print 8 inches wide on a 600 DPI dye-sublimation printer. The printing software will automatically resize your image to 4800 pixels across, then print that with pixels matching dots directly. In this case, you can argue that dots and pixels are the same thing, since the internal 4800-pixel-wide image was printed dot for dot.
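The resize arithmetic above can be sketched in a few lines. This is only an illustration of the calculation, not any actual printer driver API; the function name `pixels_for_print` is made up for the example:

```python
def pixels_for_print(width_inches, dpi):
    """Pixels needed across so that one image pixel lands on one printer dot."""
    return width_inches * dpi

# 8 inches wide on a 600 DPI printer:
width_px = pixels_for_print(8, 600)
print(width_px)  # 4800

# Keeping the 1024:768 (4:3) aspect ratio of the original image:
height_px = round(width_px * 768 / 1024)
print(height_px)  # 3600
```

So the 1024 x 768 file is resampled up to 4800 x 3600 before printing, and each of those pixels maps to exactly one dot.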

It's not so straightforward with other output technologies. Consider an inkjet printer that can only either squirt a little bit of cyan, magenta, and/or yellow ink in any one place or not (we'll ignore the special case of using a fourth ink to make black directly). This allows each dot to be one of only 8 colors, which is not what you want your photograph to look like. This printer provides the illusion of smooth color scales by using lots and lots of little dots. In that case, dots are much smaller than pixels, since it takes a number of dots to realize the information in a single full-color pixel.
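The "only 8 colors per dot" claim is just binary counting: each of the three inks is either present or absent at a given spot. A quick sketch enumerating the combinations:

```python
from itertools import product

# Each dot either receives a drop of cyan, magenta, and/or yellow ink or not,
# so a dot is one of 2**3 combinations.
cmy_dots = list(product((0, 1), repeat=3))  # (C, M, Y) on/off triples
print(len(cmy_dots))  # 8

# (0, 0, 0) is bare paper (white); (1, 1, 1) is all three inks (composite black).
```

Everything between those 8 extremes has to be faked by halftoning, i.e. varying how many tiny dots of each ink land in a given area.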

It can get even more complicated. Some inkjet printers have some control over the size of the ink dots they produce. They generally can't vary them meaningfully to give continuous color scales on their own, but you do get more than 8 colors per dot. It takes fewer dots, and therefore less area, to reproduce the information in a single original pixel. The number of dots/pixel can be quite hard to pin down.

Specs for monitors can be different again. Old CRT color monitors had fixed dots that were either red, green, or blue. Their intensity could be arbitrarily controlled, but their locations and pitch were fixed. These RGB dots were arranged in a triangular pattern so that any clump of three adjacent dots forming a triangle were each of a different color. Such clumps were called "triads". Triads ended up in a hexagonal pattern. Neither the triads nor the individual color dots were pixels.

Modern LCD monitors have clumps of RGB splotches, but those are usually arranged in a rectangle. One RGB clump then can map to a single pixel. Such monitors inherently display a fixed-size image in terms of pixels, like 1920 x 1080 for HD TV.

It gets complicated, and the terms aren't always used consistently. The little rectangular array elements of an image in a computer file can pretty reliably be called "pixels", but everything else is less certain, and gets blurry in common usage.

The one thing you can rely on is that marketing specs will highlight whatever value sounds most impressive, regardless of how much real-world bearing that has on the resulting output quality.


+5
−0

In either a print job or a digital picture, at the smallest level there are individual color markers: in print, one is called a dot; on a screen, it's called a pixel. The difference between the two is whether the image is physical or digital.

On, say, a 4 x 6 inch picture, the more dots/pixels there are, the better the quality will be.

Channels refer to colors/opacity: a 32-bit bitmap image will have 4 channels (red, green, blue, and opacity), while a grayscale image will have only one channel.

Bytes per pixel determine the number of possible colors. For instance, a 24-bit bitmap image has 3 bytes per pixel, one byte per channel, resulting in 256 x 256 x 256 possible color combinations. A grayscale bitmap, on the other hand, has one byte per pixel, resulting in 256 different values.
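The counting above is just exponentiation: each byte holds 256 values, and the channels multiply. A minimal sketch (the helper name `color_count` is made up for the example):

```python
def color_count(bytes_per_pixel):
    """Number of representable colors, at 256 values per byte."""
    return 256 ** bytes_per_pixel

print(color_count(3))  # 16777216 -- 24-bit RGB, about 16.7 million colors
print(color_count(1))  # 256      -- 8-bit grayscale levels
```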

Atomic simply means that it's the smallest possible entity: a pixel or dot can't be split any smaller.

