Assignment5.py:
#Assignment 5
import Queue
import threading
import time
import random

THREADS = 200

class addingThread(threading.Thread):
    def __init__(self, l):
        threading.Thread.__init__(self)
        self.threadID = 3
        self.name = "addingThread"
        self.l = l
        self.iteration = 0

    def run(self):
        #delay start by a random time between 0.0001 and 0.1 sec
        sleep_time = random.randint(1, 1000) / 10000.0
        time.sleep(sleep_time)
        while self.iteration < len(self.l):
            #remove the comment for Question 1.3
            #lock.acquire()
            self.l[self.iteration] = self.l[self.iteration] + 1
            #remove the comment for Question 1.3
            #lock.release()
            self.iteration = self.iteration + 1

#remove the comment for Question 1.3
#lock = threading.Lock()

workList = range(1, 101)
threads = []

# Create new threads
thread_num = 0
while thread_num < THREADS:
    thread = addingThread(workList)
    threads.append(thread)
    thread_num = thread_num + 1

# Start threads
for t in threads:
    t.start()

# Wait for all created threads to finish
for t in threads:
    t.join()

#print final list
print "Final list: "
print workList
print "Exiting Main Thread"
Consider Assignment5.py.
Part 1. Review and run Assignment5.py several times, and explain the results. Are they what is desired? Why or why not?
Note: I have checked this on several machines, but I can’t guarantee that you will always see the anomalies I’m trying to demonstrate. The desired answer is range(201,301).
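The desired answer follows from simple arithmetic, which can be checked deterministically. This is a sketch in Python 3 syntax, not part of the assignment code: the 100 elements start at 1..100, and each of the 200 threads should increment every element exactly once.

```python
THREADS = 200
work_list = list(range(1, 101))               # initial values 1..100
# each of the 200 threads adds exactly 1 to every element
expected = [x + THREADS for x in work_list]
print(expected == list(range(201, 301)))      # → True
```

Any run whose final list differs from range(201, 301) has lost updates to the race described in Part 2.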
Part 2. One line of python code may translate to several instructions that occur on the CPU. Consider the self.l[self.iteration] = self.l[self.iteration] +1 statement. In it, at least 3 things are happening serially within a thread: read the current value of the vector element, calculate the new value of the element, and write the new value of the element. Why is this relevant?
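The "one line, several instructions" point can be made concrete with Python's standard dis module, which shows the bytecode the interpreter actually executes. This sketch (in Python 3 syntax; the helper function is illustrative, not from the assignment) disassembles the same read-add-write statement:

```python
import dis

def increment(l, i):
    # the same one-line statement used in Assignment5.py's run() loop
    l[i] = l[i] + 1

# the single source line compiles to several bytecode instructions:
# a subscript load, an add, and a subscript store, among others
instructions = [ins.opname for ins in dis.get_instructions(increment)]
print(instructions)
```

A thread can be preempted between any two of these instructions, so another thread may read the old value before this thread writes the new one.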
Part 3. Remove the comments around the lock.acquire(), lock.release(), and lock = threading.Lock() calls in Assignment5.py and run it several times. Explain the results. Are they what is desired? Why or why not?
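The effect of the lock can be demonstrated in miniature. This is a sketch in Python 3 syntax with smaller numbers than the assignment (20 threads, 10 elements) so it runs quickly; the with-statement form is equivalent to paired acquire()/release() calls:

```python
import threading

def run_adders(num_threads, work_list, lock):
    def worker():
        for i in range(len(work_list)):
            # critical section: the read-modify-write happens
            # atomically with respect to the other workers
            with lock:  # equivalent to lock.acquire() ... lock.release()
                work_list[i] = work_list[i] + 1

    threads = [threading.Thread(target=worker) for _ in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

data = list(range(1, 11))
run_adders(20, data, threading.Lock())
print(data)  # each element incremented exactly 20 times: [21, 22, ..., 30]
```

With the lock held around the increment, no interleaving can lose an update, so the result is the same on every run.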
Part 4. This is a simple (and small) example of process coordination. “Real” applications and/or distributed systems can have 100s of threads and/or processes all doing different activities. How does this complexity impact the Software Systems Engineering of such systems?