Module 1 Assignment #1 Solutions

School: Johns Hopkins University
Course: EN.525.746
Subject: Electrical Engineering
Date: Apr 3, 2024
Pages: 13
525.746 Image Engineering

Problem 1 (25 pts)

a. Suppose that a flat area with center at (x0, y0) is illuminated by a light source with an illumination distribution:

    i(x, y) = K exp{ -[ ((x - x0)/4)^2 + ((y - y0)/8)^2 ] }

If the reflectance characteristic of the area is:

    r(x, y) = 10(x - x0) + 10(y - y0) + 20

what is the value of K that would yield an intensity of 100 at (x0, y0)? (Note that this "reflectance characteristic" is not strictly "reflectance" as we defined it in the lectures, as it can be greater than 1. Just ignore this lack of rigor in terminology for now and let the reflectance characteristic have values greater than one for this problem.)

Solution: The image in question is given by f(x, y) = i(x, y) r(x, y). At (x0, y0) the exponential factor is e^0 = 1 and the reflectance is 0 + 0 + 20 = 20, so

    f(x0, y0) = 100 = K (1)(0 + 0 + 20) = 20K

which gives K = 5.
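As a quick numeric check of K = 5 (sketched here in Python rather than Matlab; the arithmetic is identical):

```python
import math

# f(x, y) = i(x, y) * r(x, y) from part a, with the given scale factors 4 and 8
def f(x, y, x0, y0, K):
    i = K * math.exp(-(((x - x0) / 4) ** 2 + ((y - y0) / 8) ** 2))
    r = 10 * (x - x0) + 10 * (y - y0) + 20
    return i * r

# At the center the exponential is 1 and r is 20, so f = 20K = 100 when K = 5
print(f(8.5, 8.5, 8.5, 8.5, K=5))  # -> 100.0
```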
b. Now let the reflectance be a constant value of 1, and let K = 255. If the resulting image is digitized with n bits of intensity resolution, and the eye can detect an abrupt change of 8 shades of intensity between adjacent pixels, what is the value of n that will cause visible false contouring?

Solution: The image in question is again given by f(x, y) = i(x, y) r(x, y), but now r(x, y) = 1:

    i(x, y) = 255 exp{ -[ ((x - x0)/4)^2 + ((y - y0)/8)^2 ] }

The function has the form of a two-dimensional Gaussian, and a notional cross section of the image peaks at 255 at (x0, y0).

[Figure: cross section of the Gaussian-shaped intensity, peak value 255 at (x0, y0)]
If the intensity is quantized using n bits, then the gray levels are separated by equally spaced steps of size

    G = (255 + 1)/2^n = 256/2^n

Since an abrupt change of 8 gray levels is assumed to be detectable by the eye, it follows that G = 8 = 256/2^n, or n = 5. In other words, 32 or fewer gray levels will produce visible false contouring.

[Figure: quantized cross section of the image, with equally spaced subdivisions of size G]
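The step-size reasoning can be tabulated directly (a small Python sketch; the 0-255 range and the 8-level detectability threshold come from the problem statement):

```python
# Gray-level step size G = 256 / 2**n for an n-bit quantizer on a 0-255 range.
# The step first reaches the detectable change of 8 gray levels at n = 5
# (32 levels), which is where false contouring becomes visible.
for n in (8, 7, 6, 5, 4):
    G = 256 / 2 ** n
    print(n, G)
```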
c. Using Matlab or another programming language, graph the images in parts a and b for a 16 by 16 pixel image with (x0, y0) = (8.5, 8.5). For part b, make three images with values of n of 8, 2, and your answer to part b. Solution: The images should look something like the following. I used the Matlab script below the images to create and plot them. I also plotted the actual cross sections along the 8th row. I think they do illustrate the false contouring at least a little, but the main point is to get started thinking of working with images as two-dimensional mathematical functions.
Matlab script:

%initial k value for first image
k = 5;
xsize = 16;
ysize = 16;
x0 = xsize/2 + 0.5;
y0 = ysize/2 + 0.5;
illum = 0;
intens = 0;
reflec = 0;
rfacx = 4;
rfacy = 8;

for ii = 1:xsize
    for jj = 1:ysize
        illum(ii,jj) = k*exp(-(((ii-x0)/rfacx)^2 + ((jj-y0)/rfacy)^2));
        reflec(ii,jj) = 10*(ii-x0) + 10*(jj-y0) + 20;
    end
end
intens = illum.*reflec;

figure;
subplot(2,2,1)
imagesc(intens); colormap('gray'); colorbar;
title('Part a');

%change k value for other images
k = 255;
for ii = 1:xsize
    for jj = 1:ysize
        illum(ii,jj) = k*exp(-(((ii-x0)/rfacx)^2 + ((jj-y0)/rfacy)^2));
        reflec(ii,jj) = 1;
    end
end

intens2 = 0;
nbits = 2;
intens2 = ((k+1)/(2^nbits))*floor(((2^nbits)/k)*illum);
subplot(2,2,2)
imagesc(intens2); colormap('gray'); colorbar;
title('Part b, 2 bits');

intens3 = 0;
nbits = 5;
intens3 = ((k+1)/(2^nbits))*floor(((2^nbits)/k)*illum);
subplot(2,2,3)
imagesc(intens3); colormap('gray'); colorbar;
title('Part b, 5 bits');

intens4 = 0;
nbits = 8;
intens4 = ((k+1)/(2^nbits))*floor(((2^nbits)/k)*illum);
subplot(2,2,4)
imagesc(intens4); colormap('gray'); colorbar;
title('Part b, 8 bits');

figure;
subplot(2,2,1)
plot(illum(8,:)); title('illumination cross section')
subplot(2,2,2)
plot(intens2(8,:)); title('2 bits')
subplot(2,2,3)
plot(intens3(8,:)); title('5 bits')
subplot(2,2,4)
plot(intens4(8,:)); title('8 bits')
Problem 2 (25 pts)

a. Though not shown on the EM spectrum figure in class, US standard 60 Hz AC power is certainly an electromagnetic wave. What would be the wavelength, in meters, of an electromagnetic wave of this frequency?

Solution: λ = c/ν = 2.998 × 10^8 (m/s) / 60 (1/s) = 5.00 × 10^6 m = 5000 km.

b. At the far other end of the electromagnetic spectrum, what is the wavelength of a gamma ray photon used in Positron Emission Tomography, which has an energy of 511 × 10^3 electron volts?

Solution: E = hc/λ, so λ = hc/E = (4.14 × 10^-15 eV·s)(3 × 10^8 m/s) / (5.11 × 10^5 eV) = 2.43 × 10^-12 m.
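Both wavelength computations can be verified numerically (a Python sketch using the constants quoted in the solution):

```python
c = 2.998e8   # speed of light, m/s
h = 4.14e-15  # Planck's constant, eV*s

lam_ac = c / 60.0          # 60 Hz AC power: about 5.0e6 m, i.e. ~5000 km
lam_gamma = h * c / 511e3  # 511 keV PET photon: about 2.43e-12 m
print(lam_ac, lam_gamma)
```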
Problem 3 (25 pts) For each part a to e below, choose which one of the following types of processes best describes each operation, and provide a short (two to four sentence) explanation to support each answer. Note that you don't have to describe how to perform each process itself (we'll get to that later in the course)! Just describe which type of process it is.

(1) point process
(2) local process
(3) global process
(4) other

a. I need to prove I'm not a robot by identifying whether an image contains a store front or not.

Solution: This could possibly be considered a global process, as we are processing information from the entire picture to do this. However, it's also true that the recognition process in question is not at all easily described mathematically, and so the best answer is that it should be considered an 'other' process on those grounds.

b. The varnish coating on an old Rembrandt painting has yellowed over the years. I have a color photograph of this painting, which essentially shows me how any given color has changed due to the yellowing. What kind of process must I use to recover the original image?

Solution: This is a point process. We are mapping a combination of the (R,G,B) values of each pixel in the old image to new (R,G,B) values for the same pixel in the new image, depending only on the color of the pixel in question, and independent of where the pixels occur in the image or what is around them.

c. A gray scale photo has been overexposed and appears washed out and mostly white. I want to adjust the total dynamic range so that the dimmest regions appear black and only the brightest regions appear white.

Solution: This is a point process. We are adjusting the intensity values in the old photo to some new set of intensity values in the new image, independent of where they occur in the image or what is around them. It is basically the gray scale version of the previous question.
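The dynamic-range adjustment in part c can be sketched as a per-pixel mapping (a Python illustration; the linear [fmin, fmax] to [0, 255] stretch shown here is one common choice, not the only one):

```python
# Linear contrast stretch: once the input extrema are fixed, each output
# value depends only on the input value at the same pixel, never on its
# neighbors -- the defining property of a point process.
def stretch(img):
    flat = [v for row in img for v in row]
    fmin, fmax = min(flat), max(flat)
    return [[round(255 * (v - fmin) / (fmax - fmin)) for v in row]
            for row in img]

washed_out = [[200, 210], [220, 240]]  # mostly-white, low-contrast values
print(stretch(washed_out))  # -> [[0, 64], [128, 255]]
```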
d. A gray scale image is corrupted by 'salt and pepper' noise, which consists of numerous single pixels that are either maximum intensity (white) or minimum intensity (black), randomly scattered around the image. A median filter is often used to remove very large local intensity spikes like this, and consists of replacing each pixel intensity value with the median of the intensities of the pixels in a small square around that pixel.

Solution: This is a local process, as the value of each pixel in the resulting image depends upon the values of the pixels in a small local region in the original image.

e. What if instead of a median filter as used in part d, we used a summation filter, which replaces each pixel intensity with the sum of the pixels in a small region around the pixel?

Solution: This is again a local process, as the value of each pixel in the resulting image still depends upon the values of the pixels in a small local region in the original image. It doesn't matter that the details of the process have changed.
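A minimal sketch of the median filter from part d (in Python; Matlab's medfilt2 does the same job), showing that each output pixel depends only on a small neighborhood:

```python
from statistics import median

# 3x3 median filter: replace each interior pixel with the median of its
# 3x3 neighborhood; border pixels are left unchanged in this sketch.
def median3x3(img):
    M, N = len(img), len(img[0])
    out = [row[:] for row in img]
    for r in range(1, M - 1):
        for c in range(1, N - 1):
            nbhd = [img[r + dr][c + dc]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
            out[r][c] = median(nbhd)
    return out

noisy = [[10, 10, 10],
         [10, 255, 10],  # one "salt" pixel
         [10, 10, 10]]
print(median3x3(noisy)[1][1])  # -> 10, the spike is removed
```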
Problem 4 (25 pts) Getting started with Matlab for image engineering by extracting pixel values from an image.

a. Write a Matlab function V = pixelValue(f,r,c), where f is a gray scale image and r and c are the row and column numbers of a pixel in f. The output V is the pixel intensity value f(r,c). If you are using a different image processing and mathematics program than Matlab for the course, that is fine; just write the same function in the language of your choice.

b. Test your function by reading the image girl.tif and obtaining the pixel intensity values at the origin (1,1) and at the middle of the image. Display the values on the screen and the image in a figure, and paste them into your solutions document.

Solution:

function V = pixelValue(f,r,c)
%V = pixelValue(f,r,c) outputs the value of an image pixel at the input
%coordinates (r,c)
V = f(r,c);
end
%problem4b.m
f = imread('girl.tif');

%value at origin, which in matlab is (1,1)
Vorigin = pixelValue(f,1,1);

%value in center of image
[M,N] = size(f);
rmid = floor(M/2) + 1;
cmid = floor(N/2) + 1;
Vcenter = pixelValue(f,rmid,cmid);

%display the values
disp(Vorigin)
disp(Vcenter)

>> problem4b
   128
   111
>>
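For readers without girl.tif, the same indexing can be exercised on a synthetic image (a Python sketch of pixelValue; the test image and its 10*r + c values are made up here purely for illustration):

```python
# pixel_value mirrors the Matlab pixelValue: (r, c) are 1-based row/column,
# converted to Python's 0-based indexing.
def pixel_value(f, r, c):
    return f[r - 1][c - 1]

# 4x6 synthetic "image" whose value encodes its own coordinates: 10*r + c
M, N = 4, 6
f = [[10 * r + c for c in range(1, N + 1)] for r in range(1, M + 1)]

rmid, cmid = M // 2 + 1, N // 2 + 1  # same as Matlab's floor(M/2)+1, floor(N/2)+1
print(pixel_value(f, 1, 1), pixel_value(f, rmid, cmid))  # -> 11 34
```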