ESSENTIALS OF COMPUTER ORGAN..-TEXT
4th Edition
ISBN: 9781284033144
Author: NULL
Publisher: JONES+BART
Expert Solution & Answer
Chapter 2, Problem 52E

a)

Explanation of Solution

IEEE-754 floating point double precision:

IEEE-754 double precision uses 64 bits:

  • One bit for the sign, 11 bits for the exponent, and 52 bits for the significand.

Storing “12.5” using IEEE-754 double precision:

Step 1: Converting decimal to binary number:

Step (i): Divide the given number into two parts, integer and the fractional part. Here, the integer part is “12” and the fractional part is “.5”.

Step (ii): Divide “12” by 2 till the quotient becomes 1. Simultaneously, note the remainder for every division operation.

Step (iii): Note the remainder from the bottom to top to get the binary equivalent.

Step (iv): Consider the fraction part “.5”. Multiply the fractional part “.5” by 2 and it continues till the fraction part reaches “0”.

0.5×2=1.0

Step (v): Note the integer part to get the final result.

Thus, the binary equivalent of "12.5" is 1100.1₂.
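The conversion steps above can be sketched in Python (a minimal illustration; `to_binary` is a helper name chosen here, and it assumes a non-negative input whose fraction terminates):

```python
def to_binary(n: float, max_frac_bits: int = 52) -> str:
    """Decimal to binary: repeated division for the integer part,
    repeated multiplication for the fractional part."""
    ipart, fpart = int(n), n - int(n)
    # Steps (ii)-(iii): divide by 2, read remainders bottom to top.
    int_bits = ''
    while ipart > 0:
        int_bits = str(ipart % 2) + int_bits
        ipart //= 2
    int_bits = int_bits or '0'
    # Steps (iv)-(v): multiply by 2, read integer parts top to bottom.
    frac = ''
    while fpart != 0 and len(frac) < max_frac_bits:
        fpart *= 2
        frac += str(int(fpart))
        fpart -= int(fpart)
    return int_bits + ('.' + frac if frac else '')

print(to_binary(12.5))  # 1100.1
```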

Step 2: Normalize the binary fraction number:

Now the given binary fraction number should be normalized. To normalize the value, move the decimal point either right or left so that only single digit will be left before the decimal point.

1100.1 = 1.1001 × 2³

Step 3: Convert the exponent to 11-bit excess-1023:

To convert the exponent into 11-bit excess-1023 notation, add 1023 to the exponent value and then convert the sum to its binary equivalent.

1023+3=1026

Converting: 1026₁₀ = 10000000010₂

Step 4: Drop the hidden bit from the significand:

Because the normalized significand always begins with "1", that leading "1" is not stored (it is the hidden bit); remove it.

1.1001 → 1001

Step 5: Framing the number "12.5" in 64-bit IEEE-754 double precision:

Sign bit (1 bit) | Exponent (11 bits) | Significand (52 bits)
0 | 10000000010 | 1001000000000000000000000000000000000000000000000000

Thus, the number "12.5" in 64-bit IEEE-754 double precision is represented as "0100000000101001000000000000000000000000000000000000000000000000".
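This bit pattern can be checked against Python's own double encoding (a quick sanity check using the standard `struct` module; `double_bits` is a helper name chosen here):

```python
import struct

def double_bits(x: float) -> str:
    """Return the 64-bit IEEE-754 pattern of a Python float."""
    (raw,) = struct.unpack('>Q', struct.pack('>d', x))
    return f'{raw:064b}'

bits = double_bits(12.5)
print(bits[0])     # sign: 0
print(bits[1:12])  # exponent: 10000000010 (1026 = 1023 + 3)
print(bits[12:])   # significand: 1001 followed by 48 zeros
```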

b)

Explanation of Solution

Storing “-1.5” using IEEE-754 double precision:

Step 1: Converting decimal to binary number:

Step (i): The integer part of "1.5" is "1", which is 1 in binary; the minus sign is handled separately by the sign bit. Consider the fractional part ".5": multiply it by 2 repeatedly until the fractional part reaches "0".

0.5×2=1.0

Step (ii): Note the integer part to get the final result.

Thus, the binary equivalent of "1.5" is 1.1₂.

Step 2: Normalize the binary fraction number:

Now the given binary fraction number should be normalized. To normalize the value, move the decimal point either right or left so that only single digit will be left before the decimal point.

1.1 = 1.1 × 2⁰

Step 3: Convert the exponent to 11-bit excess-1023:

To convert the exponent into 11-bit excess-1023 notation, add 1023 to the exponent value and then convert the sum to its binary equivalent.

1023+0=1023

Converting: 1023₁₀ = 01111111111₂

Step 4: Drop the hidden bit from the significand:

Because the normalized significand always begins with "1", that leading "1" is not stored (it is the hidden bit); remove it.

1.1 → 1

Step 5: Framing the number "-1.5" in 64-bit IEEE-754 double precision:

Sign bit (1 bit) | Exponent (11 bits) | Significand (52 bits)
1 | 01111111111 | 1000000000000000000000000000000000000000000000000000

Thus, the number "-1.5" in 64-bit IEEE-754 double precision is represented as "1011111111111000000000000000000000000000000000000000000000000000".

c)

Explanation of Solution

Storing “.75” using IEEE-754 double precision:

Step 1: Converting decimal to binary number:

Step (i): Consider the fractional part ".75" (the integer part is "0"). Multiply it by 2 repeatedly until the fractional part reaches "0".

0.75 × 2 = 1.50 → 1
0.50 × 2 = 1.00 → 1

Step (ii): Note the integer parts to get the final result.

0.75₁₀ = 0.11₂

Thus, the binary equivalent of ".75" is 0.11₂.

Step 2: Normalize the binary fraction number:

Now the given binary fraction number should be normalized. To normalize the value, move the decimal point either right or left so that only single digit will be left before the decimal point.

0.11 = 1.1 × 2⁻¹

Step 3: Convert the exponent to 11-bit excess-1023:

To convert the exponent into 11-bit excess-1023 notation, add 1023 to the exponent value and then convert the sum to its binary equivalent.

1023 + (−1) = 1022

Converting: 1022₁₀ = 01111111110₂

Step 4: Drop the hidden bit from the significand:

Because the normalized significand always begins with "1", that leading "1" is not stored (it is the hidden bit); remove it.

1.1 → 1

Step 5: Framing the number ".75" in 64-bit IEEE-754 double precision:

Sign bit (1 bit) | Exponent (11 bits) | Significand (52 bits)
0 | 01111111110 | 1000000000000000000000000000000000000000000000000000

Thus, the number ".75" in 64-bit IEEE-754 double precision is represented as "0011111111101000000000000000000000000000000000000000000000000000".

d)

Explanation of Solution

Storing “26.625” using IEEE-754 double precision:

Step 1: Converting decimal to binary number:

Step (i): Divide the given number into two parts, integer and the fractional part. Here, the integer part is “26” and the fractional part is “.625”.

Step (ii): Divide “26” by 2 till the quotient becomes 1. Simultaneously, note the remainder for every division operation.

Step (iii): Note the remainder from the bottom to top to get the binary equivalent.

Step (iv): Consider the fractional part ".625". Multiply it by 2 repeatedly until the fractional part reaches "0".

0.625 × 2 = 1.25 → 1
0.25 × 2 = 0.50 → 0
0.50 × 2 = 1.00 → 1

Step (v): Note the integer parts to get the final result.

0.625₁₀ = 0.101₂

Thus, the binary equivalent of "26.625" is 11010.101₂.

Step 2: Normalize the binary fraction number:

Now the given binary fraction number should be normalized. To normalize the value, move the decimal point either right or left so that only single digit will be left before the decimal point.

11010.101 = 1.1010101 × 2⁴

Step 3: Convert the exponent to 11-bit excess-1023:

To convert the exponent into 11-bit excess-1023 notation, add 1023 to the exponent value and then convert the sum to its binary equivalent.

1023+4=1027

Converting: 1027₁₀ = 10000000011₂

Step 4: Drop the hidden bit from the significand:

Because the normalized significand always begins with "1", that leading "1" is not stored (it is the hidden bit); remove it.

1.1010101 → 1010101

Step 5: Framing the number "26.625" in 64-bit IEEE-754 double precision:

Sign bit (1 bit) | Exponent (11 bits) | Significand (52 bits)
0 | 10000000011 | 1010101000000000000000000000000000000000000000000000

Thus, the number "26.625" in 64-bit IEEE-754 double precision is represented as "0100000000111010101000000000000000000000000000000000000000000000".
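The fields derived above can also be reassembled and decoded back to the original value, which confirms the encoding (a sketch using the standard `struct` module):

```python
import struct

# Fields from the steps above for 26.625:
# sign = 0, biased exponent = 1027, stored significand bits = 1010101.
# The 7 significand bits are left-aligned within the 52-bit field.
sign, exponent, significand = 0, 1027, 0b1010101 << (52 - 7)
raw = (sign << 63) | (exponent << 52) | significand
(value,) = struct.unpack('>d', struct.pack('>Q', raw))
print(value)  # 26.625
```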
