USE SIMPLE PYTHON CODE TO COMPLETE THE FOLLOWING:
1) Basic version with two levels of threads (master and slaves)
One master thread aggregates and sums the results of n slave-threads, where each slave-thread sums a different range of values in an array of 1000 random integers (please write code to generate 1000 random integers to populate the array).
The number of slave-threads is a parameter the user can change. For example, if the user chooses 4 slave-threads, each slave-thread will sum 1000/4 = 250 numbers. If the user chooses 3 slave-threads, the first two may each sum 333 numbers and the third slave-thread sums the remaining 334 numbers.
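As a quick sketch of that splitting rule (the helper make_ranges is hypothetical, shown only for illustration):

# Sketch: index ranges handed to each slave thread.
# The first n-1 threads each get length // n elements; the last thread
# also takes the remainder (e.g. 333, 333, 334 for n = 3).
def make_ranges(length, n):
    chunk = length // n
    return [(i * chunk, (i + 1) * chunk if i < n - 1 else length)
            for i in range(n)]

print(make_ranges(1000, 4))  # [(0, 250), (250, 500), (500, 750), (750, 1000)]
print(make_ranges(1000, 3))  # [(0, 333), (333, 666), (666, 1000)]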
2) Advanced version with more than two levels of threads
The master thread creates two slave-threads, where each slave-thread is responsible for summing half of the array.
Each slave-thread will fork/spawn two new slave-threads, where each new slave-thread sums half of the array segment received by its parent. Each slave-thread returns its subtotal to its parent thread, and the parent thread aggregates the subtotals and returns the total to its own parent thread. Start with a 7-node thread tree; when you are comfortable, you can extend it to a full thread tree.
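For reference, the 7-node tree over the 1000-element array splits the segments roughly like this (a sketch, assuming an even split at each level):

                 master: [0, 1000)
                /                  \
    slave A: [0, 500)        slave B: [500, 1000)
      /          \              /            \
  [0, 250)   [250, 500)    [500, 750)    [750, 1000)

Each leaf sums its quarter, each parent adds its two children's subtotals, and the master reports the grand total.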
*** THE CODE BELOW USES MULTIPROCESSING. PLEASE REPLACE IT WITH MULTITHREADING FOR PART 1 AND PART 2. ***
*** BASIC VERSION PART 1 ***
import multiprocessing
import random

def sum_range(arr, start, end):
    # Calculates the sum of a range of values in an array.
    return sum(arr[start:end])

def sum_array(arr, num_processes):
    # Sums the values in an array using multiple processes.
    pool = multiprocessing.Pool(num_processes)
    chunk_size = len(arr) // num_processes
    results = []
    for i in range(num_processes):
        start = i * chunk_size
        end = start + chunk_size if i < num_processes - 1 else len(arr)
        result = pool.apply_async(sum_range, args=(arr, start, end))
        results.append(result)
    total = sum(result.get() for result in results)
    pool.close()
    pool.join()
    return total

if __name__ == '__main__':
    num_processes = 4
    arr = [random.randint(1, 100) for _ in range(1000)]
    total = sum_array(arr, num_processes)
    print(f'Total: {total}')
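One way to replace Part 1 with multithreading is sketched below, using Python's built-in threading module. The slave threads write their subtotals into a shared results list (an implementation choice of this sketch, not something specified in the question), and the master thread joins them and adds the subtotals:

import threading
import random

def sum_range(arr, start, end, results, index):
    # Slave thread: sum one range of the array and store the subtotal.
    results[index] = sum(arr[start:end])

def sum_array(arr, num_threads):
    # Master thread: split the array, start the slave threads,
    # wait for them to finish, and aggregate their subtotals.
    chunk_size = len(arr) // num_threads
    results = [0] * num_threads
    threads = []
    for i in range(num_threads):
        start = i * chunk_size
        end = start + chunk_size if i < num_threads - 1 else len(arr)
        t = threading.Thread(target=sum_range, args=(arr, start, end, results, i))
        threads.append(t)
        t.start()
    for t in threads:
        t.join()
    return sum(results)

if __name__ == '__main__':
    num_threads = 4  # parameter the user can change
    arr = [random.randint(1, 100) for _ in range(1000)]
    total = sum_array(arr, num_threads)
    print(f'Total: {total}')

Because the threads only read the shared array and each writes a distinct slot of results, no lock is needed in this sketch.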
*** MORE ADVANCED VERSION PART 2 BELOW ***
import multiprocessing
import random

def sum_range(arr, start, end):
    # Calculates the sum of a range of values in an array.
    return sum(arr[start:end])

def sum_array(arr, num_processes):
    # Sums the values in an array using a multi-level process tree.
    if num_processes == 1:
        return sum_range(arr, 0, len(arr))
    else:
        pool = multiprocessing.Pool(num_processes)
        chunk_size = len(arr) // num_processes
        results = []
        for i in range(num_processes):
            start = i * chunk_size
            end = start + chunk_size if i < num_processes - 1 else len(arr)
            result = pool.apply_async(sum_range, args=(arr, start, end))
            results.append(result)
        subtotals = [result.get() for result in results]
        pool.close()
        pool.join()
        return sum_array(subtotals, num_processes // 2)

if __name__ == '__main__':
    num_processes = 7
    arr = [random.randint(1, 100) for _ in range(1000)]
    total = sum_array(arr, num_processes)
    print(f'Total: {total}')
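A possible multithreading sketch for Part 2 is below. Each parent thread spawns two child threads for the two halves of its segment and adds their subtotals; with depth = 2, the main (master) thread plus 2 slave threads plus 4 leaf threads form the requested 7-node tree, and increasing depth extends it toward a full thread tree. Returning subtotals through small shared lists is an assumption of this sketch:

import threading
import random

def tree_sum(arr, start, end, depth, result, index):
    # Leaf nodes (depth == 0) sum their segment directly. Internal nodes
    # spawn two child threads, each responsible for half of the parent's
    # segment, then aggregate the two subtotals and pass the sum upward.
    if depth == 0:
        result[index] = sum(arr[start:end])
        return
    mid = (start + end) // 2
    subtotals = [0, 0]
    left = threading.Thread(target=tree_sum, args=(arr, start, mid, depth - 1, subtotals, 0))
    right = threading.Thread(target=tree_sum, args=(arr, mid, end, depth - 1, subtotals, 1))
    left.start()
    right.start()
    left.join()
    right.join()
    result[index] = subtotals[0] + subtotals[1]

if __name__ == '__main__':
    arr = [random.randint(1, 100) for _ in range(1000)]
    total = [0]
    # depth = 2 gives a 7-node tree: 1 master + 2 slaves + 4 leaves.
    # Increase depth to extend it toward a full thread tree.
    tree_sum(arr, 0, len(arr), 2, total, 0)
    print(f'Total: {total[0]}')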