PetaPixel article on limits of computational photography



Full article: https://petapixel.com/2023/02/04/the-limits-of-computational-photography/

Some excerpts below:

On the question of whether dedicated cameras are better than today's smartphone cameras, the author argues: "yes, dedicated cameras have some significant advantages. Primarily, the relevant metric is what I call 'photographic bandwidth' – the information-theoretic limit on the amount of optical data that can be absorbed by the camera under given photographic conditions (ambient light, exposure time, etc.)."

Cell phone cameras capture only a fraction of the photographic bandwidth that dedicated cameras do, mostly due to size constraints.
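The size constraint is easy to make concrete: for a fixed exposure time and scene brightness, the light a camera can gather scales with the area of its entrance pupil. Below is a minimal back-of-the-envelope sketch; the focal lengths and f-numbers are illustrative assumptions of my own, not figures from the article.

```python
import math

def aperture_area_mm2(focal_length_mm: float, f_number: float) -> float:
    """Area of the entrance pupil; its diameter = focal length / f-number."""
    diameter = focal_length_mm / f_number
    return math.pi * (diameter / 2) ** 2

# Assumed example optics: a typical phone main camera vs. a full-frame 50mm lens.
phone = aperture_area_mm2(focal_length_mm=6.0, f_number=1.8)
dedicated = aperture_area_mm2(focal_length_mm=50.0, f_number=1.8)

# At equal f-number the ratio reduces to the square of the focal-length ratio.
print(f"dedicated lens gathers ~{dedicated / phone:.0f}x more light")  # → ~69x
```

This is only one contributor to photographic bandwidth (pixel size and optical quality matter too, as the list below notes), but it shows why a physically larger camera starts with far more raw information to work with.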
 
There are various factors that enable a dedicated camera to capture more information about the scene:
  • Objective Lens Diameter
  • Optical Path Quality
  • Pixel Size and Sensor Depth
Computational photography algorithms try to correct the following types of errors:
  • “Injective” errors. Errors where photons end up in the “wrong” place on the sensor, but they don’t necessarily clobber each other. E.g. if our lens causes the red light to end up slightly further out from the center than it should, we can correct for that by moving red light closer to the center in the processed photograph. Some fraction of chromatic aberration is like this, and we can remove a bit of chromatic error by re-shaping the sampled red, green, and blue images. Lenses also tend to have geometric distortions which warp the image towards the edges – we can un-warp them in software. Computational photography can actually help a fair bit here.
  • “Informational” errors. Errors where we lose some information, but in a non-geometrically-complicated way. For example, lenses tend to exhibit vignetting effects, where the image is darker towards the edges of the lens. Computational photography can’t recover the information lost here, but it can help with basic touch-ups like brightening the darkened edges of the image.
  • “Non-injective” errors. Errors where photons actually end up clobbering pixels they shouldn’t, such as coma. Computational photography can try to fight errors like this using processes like deconvolution, but it tends not to work very well.
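To make the deconvolution point concrete, here is a minimal sketch of Wiener deconvolution, a classic frequency-domain way to undo a known blur. The Gaussian-free box PSF, image size, and noise-to-signal ratio are all my own illustrative assumptions; real non-injective errors such as coma have messier, spatially varying point-spread functions, which is part of why deconvolution "tends to not work very well" in practice.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, noise_to_signal=1e-2):
    """Estimate the original image from a blurred image and a known PSF."""
    H = np.fft.fft2(psf, s=blurred.shape)  # blur's frequency response
    G = np.fft.fft2(blurred)
    # Wiener filter: H* / (|H|^2 + NSR). The NSR term damps frequencies
    # the blur nearly destroyed, instead of amplifying noise by dividing by ~0.
    F_hat = G * np.conj(H) / (np.abs(H) ** 2 + noise_to_signal)
    return np.real(np.fft.ifft2(F_hat))

# Toy example: blur a synthetic image with a 3x3 box PSF, then restore it.
rng = np.random.default_rng(0)
image = rng.random((64, 64))
psf = np.zeros((64, 64))
psf[:3, :3] = 1.0 / 9.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(psf)))
restored = wiener_deconvolve(blurred, psf)
```

In this idealized circular-convolution setup the restoration is close to the original; with a real lens the PSF is only approximately known and varies across the frame, so results degrade quickly.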
The author then goes on to criticize the practice of imposing too strong a "prior" in computational photography algorithms, to the point that the camera may "just be guessing" what the image looks like, with very little real information about the scene.

