Regression and Classification SC4001 – Tutorial 2
Training dataset, $\mathbf{x} = (x_1, x_2, x_3)^T$ with target $d$:

x = (x1, x2, x3)         d
(0.09, −0.44, −0.15)    −2.57
(0.69, −0.99, −0.76)    −2.97
(0.34, 0.65, −0.73)      0.96
(0.15, 0.78, −0.58)      1.04
(−0.63, −0.78, −0.56)   −3.21
(0.96, 0.62, −0.66)      1.05
(0.63, −0.45, −0.14)    −2.39
(0.88, 0.64, −0.33)      0.66
Linear neuron: inputs $x_1, x_2, x_3$ and a bias input $+1$; activation $u = \mathbf{w}^T\mathbf{x} + b$, output $y = u$.

Initial weights and bias: $\mathbf{w} = (0.77,\ 0.02,\ 0.63)^T$, $b = 0.0$; learning factor $\alpha = 0.01$.
(a) SGD for the linear neuron

Given a training dataset $\{(\mathbf{x}_p, d_p)\}_{p=1}^{P}$:
Set the learning parameter $\alpha$; initialize $\mathbf{w}$ and $b$.
Repeat until convergence:
    For every training pattern $(\mathbf{x}_p, d_p)$:
        $y_p = \mathbf{x}_p^T\mathbf{w} + b$
        $\mathbf{w} \leftarrow \mathbf{w} + \alpha\,(d_p - y_p)\,\mathbf{x}_p$
        $b \leftarrow b + \alpha\,(d_p - y_p)$
SGD with learning factor $\alpha = 0.01$; shuffle the inputs.

Epoch 1, $p = 1$: apply $\mathbf{x}_p = (0.34,\ 0.65,\ -0.73)^T$, $d_p = 0.96$:

$y_p = \mathbf{x}_p^T\mathbf{w} + b = (0.34,\ 0.65,\ -0.73)\,(0.77,\ 0.02,\ 0.63)^T + 0.0 = -0.19$

s.e. $= (d_p - y_p)^2 = (0.96 + 0.19)^2 = 1.31$

$\mathbf{w} \leftarrow \mathbf{w} + \alpha\,(d_p - y_p)\,\mathbf{x}_p = (0.77,\ 0.02,\ 0.63)^T + 0.01 \times (0.96 + 0.19)\,(0.34,\ 0.65,\ -0.73)^T = (0.78,\ 0.03,\ 0.62)^T$

$b \leftarrow b + \alpha\,(d_p - y_p) = 0.0 + 0.01 \times (0.96 + 0.19) = 0.01$
$p = 2$: apply $\mathbf{x}_p = (0.63,\ -0.45,\ -0.14)^T$, $d_p = -2.39$:

$y_p = \mathbf{x}_p^T\mathbf{w} + b = (0.63,\ -0.45,\ -0.14)\,(0.78,\ 0.03,\ 0.63)^T + 0.01 = 0.4$

s.e. $= (d_p - y_p)^2 = (-2.39 - 0.4)^2 = 7.78$

$\mathbf{w} \leftarrow \mathbf{w} + \alpha\,(d_p - y_p)\,\mathbf{x}_p = (0.78,\ 0.03,\ 0.63)^T + 0.01 \times (-2.39 - 0.4)\,(0.63,\ -0.45,\ -0.14)^T = (0.76,\ 0.04,\ 0.63)^T$

$b \leftarrow b + \alpha\,(d_p - y_p) = 0.01 + 0.01 \times (-2.39 - 0.4) = -0.02$

Continue applying the other patterns, then continue with epochs 2, 3, … until convergence.
Epoch 1:

x                        d       y       s.e.   w                      b
(0.34, 0.65, −0.73)      0.96   −0.19    1.31   (0.78, 0.03, 0.63)     0.01
(0.63, −0.45, −0.14)    −2.39    0.4     7.78   (0.76, 0.04, 0.63)    −0.02
(0.88, 0.64, −0.33)      0.66    0.47    0.04   (0.76, 0.04, 0.63)    −0.01
(0.96, 0.62, −0.66)      1.05    0.33    0.52   (0.77, 0.05, 0.62)    −0.01
(0.09, −0.44, −0.15)    −2.57   −0.05    6.34   (0.76, 0.06, 0.63)    −0.03
(0.69, −0.99, −0.76)    −2.97   −0.04    8.59   (0.74, 0.09, 0.65)    −0.06
(−0.63, −0.78, −0.56)   −3.21   −0.96    5.05   (0.76, 0.10, 0.66)    −0.08
(0.15, 0.78, −0.58)      1.04   −0.27    1.73   (0.76, 0.11, 0.65)    −0.07

m.s.e. = 3.92
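The per-pattern arithmetic above is easy to verify programmatically. A minimal NumPy sketch of this SGD loop, seeded with the shuffled pattern order, initial weights, and learning factor from the worked example (variable names are ours):

```python
import numpy as np

# shuffled training patterns, in the order used above
X = np.array([[0.34, 0.65, -0.73], [0.63, -0.45, -0.14],
              [0.88, 0.64, -0.33], [0.96, 0.62, -0.66],
              [0.09, -0.44, -0.15], [0.69, -0.99, -0.76],
              [-0.63, -0.78, -0.56], [0.15, 0.78, -0.58]])
d = np.array([0.96, -2.39, 0.66, 1.05, -2.57, -2.97, -3.21, 1.04])

w = np.array([0.77, 0.02, 0.63])   # initial weights
b, alpha = 0.0, 0.01               # initial bias, learning factor

for epoch in range(200):           # 'repeat until convergence'
    se = []
    for x_p, d_p in zip(X, d):
        y_p = x_p @ w + b                # neuron output
        se.append((d_p - y_p)**2)        # squared error before the update
        w += alpha * (d_p - y_p) * x_p   # SGD weight update
        b += alpha * (d_p - y_p)         # SGD bias update
    mse = np.mean(se)

print(w, b, mse)  # should approach w ≈ (0.37, 2.57, −0.21), b ≈ −1.17
```

Printing $\mathbf{w}$, $b$, and the squared error inside the inner loop during epoch 1 reproduces the table above.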
At convergence: $\mathbf{w} = (0.37,\ 2.57,\ -0.21)^T$, $b = -1.17$, mse $= 0.054$.

If $\mathbf{x} = (x_1, x_2, x_3)^T$, the function learned by the linear neuron:

$y = \mathbf{w}^T\mathbf{x} + b = (0.37,\ 2.57,\ -0.21)\,(x_1,\ x_2,\ x_3)^T - 1.17$
$y = 0.37x_1 + 2.57x_2 - 0.21x_3 - 1.17$

Predicted values, $y = 0.37x_1 + 2.57x_2 - 0.21x_3 - 1.17$:

x: [-0.63 -0.78 -0.56], d: -3.21, y: -3.28035
x: [ 0.96  0.62 -0.66], d:  1.05, y:  0.919454
x: [ 0.09 -0.44 -0.15], d: -2.57, y: -2.22926
x: [ 0.88  0.64 -0.33], d:  0.66, y:  0.871378
x: [ 0.34  0.65 -0.73], d:  0.96, y:  0.782616
x: [ 0.15  0.78 -0.58], d:  1.04, y:  1.01435
x: [ 0.69 -0.99 -0.76], d: -2.97, y: -3.29005
x: [ 0.63 -0.45 -0.14], d: -2.39, y: -2.0579
(b) GD for a linear neuron

Given a training dataset $(\mathbf{X}, \mathbf{d})$:
Set the learning parameter $\alpha$; initialize $\mathbf{w}$ and $b$.
Repeat until convergence:
    $\mathbf{y} = \mathbf{X}\mathbf{w} + b\mathbf{1}$
    $\mathbf{w} \leftarrow \mathbf{w} + \alpha\,\mathbf{X}^T(\mathbf{d} - \mathbf{y})$
    $b \leftarrow b + \alpha\,\mathbf{1}^T(\mathbf{d} - \mathbf{y})$
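The same updates can be expressed directly in matrix form. A minimal NumPy sketch of this batch GD loop (function and variable names are ours):

```python
import numpy as np

def gd_linear(X, d, w, b, alpha=0.01, epochs=200):
    """Batch gradient descent for a linear neuron y = Xw + b."""
    for _ in range(epochs):
        y = X @ w + b                   # outputs for all patterns at once
        w = w + alpha * X.T @ (d - y)   # weight update with X^T(d - y)
        b = b + alpha * np.sum(d - y)   # bias update with 1^T(d - y)
    return w, b
```

Running it on the dataset and initial values below should approach the epoch-200 values reported later.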
$$\mathbf{X} = \begin{pmatrix} 0.09 & -0.44 & -0.15 \\ 0.69 & -0.99 & -0.76 \\ 0.34 & 0.65 & -0.73 \\ 0.15 & 0.78 & -0.58 \\ -0.63 & -0.78 & -0.56 \\ 0.96 & 0.62 & -0.66 \\ 0.63 & -0.45 & -0.14 \\ 0.88 & 0.64 & -0.33 \end{pmatrix}, \qquad \mathbf{d} = \begin{pmatrix} -2.57 \\ -2.97 \\ 0.96 \\ 1.04 \\ -3.21 \\ 1.05 \\ -2.39 \\ 0.66 \end{pmatrix}$$

GD for a linear neuron. Initial weights and bias: $\mathbf{w} = (0.77,\ 0.02,\ 0.63)^T$, $b = 0.0$; learning factor $\alpha = 0.01$.

Output:

$$\mathbf{y} = \mathbf{X}\mathbf{w} + b\mathbf{1} = (-0.03,\ 0.03,\ -0.18,\ -0.24,\ -0.85,\ 0.34,\ 0.39,\ 0.48)^T$$

m.s.e. $= \frac{1}{8}\sum_{p=1}^{8}(d_p - y_p)^2 = \frac{1}{8}\left[(-2.57 + 0.03)^2 + (-2.97 - 0.03)^2 + \cdots + (0.66 - 0.48)^2\right] = 4.02$

Updates:

$$\mathbf{w} \leftarrow \mathbf{w} + \alpha\,\mathbf{X}^T(\mathbf{d} - \mathbf{y}) = (0.77,\ 0.02,\ 0.63)^T + 0.01 \times \mathbf{X}^T(\mathbf{d} - \mathbf{y}) = (0.76,\ 0.11,\ 0.65)^T$$
$$b \leftarrow b + \alpha\,\mathbf{1}^T(\mathbf{d} - \mathbf{y}) = 0.0 + 0.01 \times \mathbf{1}^T(\mathbf{d} - \mathbf{y}) = -0.07$$
Selected epochs:

epoch   y                                                       mse    w                       b
2       (−0.15, −0.16, −0.22, −0.25, −1.01, 0.29, 0.26, 0.45)   3.66   (0.75, 0.21, 0.67)     −0.14
200     (−2.23, −3.29, 0.78, 1.01, −3.28, 0.92, −2.06, 0.87)    0.05   (0.368, 2.56, −0.21)   −1.164

At convergence: $\mathbf{w} = (0.37,\ 2.57,\ -0.21)^T$, $b = -1.16$, mse $= 0.054$.

If $\mathbf{x} = (x_1, x_2, x_3)^T$, the function learned by the linear neuron:

$y = \mathbf{w}^T\mathbf{x} + b = (0.37,\ 2.57,\ -0.21)\,(x_1,\ x_2,\ x_3)^T - 1.16$
$y = 0.37x_1 + 2.57x_2 - 0.21x_3 - 1.16$
After learning:

x                        d       y (SGD)   y (GD)
(0.09, −0.44, −0.15)    −2.57   −2.23     −2.23
(0.69, −0.99, −0.76)    −2.97   −3.29     −3.29
(0.34, 0.65, −0.73)      0.96    0.78      0.78
(0.15, 0.78, −0.58)      1.04    1.01      1.01
(−0.63, −0.78, −0.56)   −3.21   −3.28     −3.28
(0.96, 0.62, −0.66)      1.05    0.92      0.92
(0.63, −0.45, −0.14)    −2.39   −2.06     −2.06
(0.88, 0.64, −0.33)      0.66    0.87      0.88

            SGD                                      GD
m.s.e.      0.055                                    0.054
w           (0.369, 2.566, −0.212)                   (0.368, 2.567, −0.207)
b           −1.165                                   −1.163
y           0.37x1 + 2.57x2 − 0.21x3 − 1.17          0.37x1 + 2.57x2 − 0.21x3 − 1.16
$\mathbf{x}_1 = (5, 1)^T$, $\mathbf{x}_2 = (7, 3)^T$, $\mathbf{x}_3 = (3, 2)^T$, $\mathbf{x}_4 = (5, 4)^T$: class 1
$\mathbf{x}_5 = (0, 0)^T$, $\mathbf{x}_6 = (-1, -3)^T$, $\mathbf{x}_7 = (-2, 3)^T$, $\mathbf{x}_8 = (-3, 0)^T$: class 2

Center of class 1: $\mathbf{m}_1 = \frac{1}{4}(\mathbf{x}_1 + \mathbf{x}_2 + \mathbf{x}_3 + \mathbf{x}_4) = (5.0,\ 2.5)^T$
Center of class 2: $\mathbf{m}_2 = \frac{1}{4}(\mathbf{x}_5 + \mathbf{x}_6 + \mathbf{x}_7 + \mathbf{x}_8) = (-1.5,\ 0.0)^T$

The two classes are linearly separable!
The vector connecting the two centroids: $\mathbf{m}_1 - \mathbf{m}_2 = (6.5,\ 2.5)^T$  (1)

The midpoint between the two centroids: $\frac{1}{2}(\mathbf{m}_1 + \mathbf{m}_2) = \frac{1}{2}\left[(5.0,\ 2.5)^T + (-1.5,\ 0.0)^T\right] = (1.75,\ 1.25)^T$

For any point $(x_1, x_2)^T$ on the boundary line, the vector connecting that point to the midpoint is $(x_1 - 1.75,\ x_2 - 1.25)^T$  (2)

Since vectors (1) and (2) are normal to each other, their inner product must be zero:

$(6.5,\ 2.5)\,(x_1 - 1.75,\ x_2 - 1.25)^T = 0$
$6.5(x_1 - 1.75) + 2.5(x_2 - 1.25) = 0$
$6.5x_1 + 2.5x_2 - 14.5 = 0$
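This construction is easy to verify numerically. A short NumPy check (a sketch; variable names are ours):

```python
import numpy as np

class1 = np.array([[5, 1], [7, 3], [3, 2], [5, 4]])
class2 = np.array([[0, 0], [-1, -3], [-2, 3], [-3, 0]])

m1 = class1.mean(axis=0)   # centroid of class 1: (5.0, 2.5)
m2 = class2.mean(axis=0)   # centroid of class 2: (-1.5, 0.0)

w = m1 - m2                # vector connecting the centroids: (6.5, 2.5)
mid = 0.5 * (m1 + m2)      # midpoint: (1.75, 1.25)
b = -w @ mid               # -14.5

print(w, b)                # boundary: 6.5*x1 + 2.5*x2 - 14.5 = 0
```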
Perceptron implementing the boundary (inputs $x_1$, $x_2$ and a bias input $+1$):

$u = 6.5x_1 + 2.5x_2 - 14.5 = 0$

Weights: $\mathbf{w} = (6.5,\ 2.5)^T$; bias: $b = -14.5$.
$u = \mathbf{w}^T\mathbf{x} + b = (6.5,\ 2.5)\,(x_1,\ x_2)^T - 14.5 = 6.5x_1 + 2.5x_2 - 14.5$

Decision rule: $u > 0 \rightarrow$ class 1; $u \le 0 \rightarrow$ class 2.

Training patterns:

x        (5, 1)   (7, 3)   (3, 2)   (5, 4)   (0, 0)   (−1, −3)   (−2, 3)   (−3, 0)
u        20.5     38.5     10.0     28.0     −14.5    −28.5      −20.0     −34.0
class    1        1        1        1        2        2          2         2

Testing patterns:

$\mathbf{x} = (4,\ 2)^T$: $u = 16.5 > 0 \rightarrow$ class 1
$\mathbf{x} = (0,\ 5)^T$: $u = -2 \le 0 \rightarrow$ class 2
$\mathbf{x} = (36/13,\ 0)^T$: $u = 3.5 > 0 \rightarrow$ class 1

Perfectly classified!
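The table and the test classifications can be reproduced with the decision rule above (a sketch; names are ours):

```python
import numpy as np

w, b = np.array([6.5, 2.5]), -14.5

def classify(x):
    u = w @ x + b
    return u, 1 if u > 0 else 2               # u > 0 → class 1, u ≤ 0 → class 2

patterns = [(5, 1), (7, 3), (3, 2), (5, 4),       # class 1 (training)
            (0, 0), (-1, -3), (-2, 3), (-3, 0),   # class 2 (training)
            (4, 2), (0, 5), (36/13, 0)]           # testing patterns
for x in patterns:
    print(x, classify(np.array(x)))
```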
```python
from sklearn.datasets import make_blobs

# define dataset
X, y = make_blobs(n_samples=100, n_features=3, centers=2,
                  cluster_std=5, random_state=1)

# print first few examples
for i in range(5):
    print(X[i], y[i])
```

Output:

```
[  5.82704597  -8.69737967 -14.86660705] 1
[ -5.95713961   6.15921976 -16.55912956] 0
[ -7.16265579  10.13010842  -5.4897589 ] 0
[ -5.19652244  -6.84653722  -9.28479932] 1
[  0.9050892    2.91602569  -7.55512177] 0
```
```python
import numpy as np
import torch

# a class for a logistic neuron
class Logistic():
    def __init__(self):
        self.w = torch.tensor(0.05*np.random.rand(3), dtype=torch.double,
                              requires_grad=True)
        self.b = torch.tensor(0., dtype=torch.double, requires_grad=True)

    def __call__(self, x):
        u = torch.matmul(torch.tensor(x), self.w) + self.b
        logits = torch.sigmoid(u)
        return logits
```
```python
import torch.nn as nn

# binary cross-entropy as the loss function
loss = nn.BCELoss()

# create a logistic neuron object
model = Logistic()

# one iteration of training
def train(model, inputs, targets, learning_rate):
    logits = model(inputs)
    loss_ = loss(logits, torch.tensor(targets, dtype=torch.double))
    err = torch.sum(torch.not_equal(logits > 0.5, torch.tensor(targets)))
    loss_.backward()
    with torch.no_grad():
        model.w -= learning_rate * model.w.grad
        model.b -= learning_rate * model.b.grad
        model.w.grad = None
        model.b.grad = None
    return loss_, err
```
```python
# training begins
lr = 0.01  # learning rate (value assumed; not shown in this excerpt)
idx = np.arange(100)
entropy, err = [], []
for epoch in range(100):
    np.random.shuffle(idx)
    entropy_, err_ = train(model, X[idx], y[idx], lr)
    entropy.append(entropy_.detach().numpy())
    err.append(err_.detach().numpy())
```
Learning curves (cross-entropy loss and classification error over the training epochs).
At convergence:

w: [-0.07863051 -0.38637461  0.04367386], b: -0.0053
entropy: 0.30624, error: 14

Decision boundary: $u = \mathbf{w}^T\mathbf{x} + b = 0$

$(-0.079,\ -0.386,\ 0.044)\,(x_1,\ x_2,\ x_3)^T - 0.005 = 0$
$-0.079x_1 - 0.386x_2 + 0.044x_3 - 0.005 = 0$
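As a sanity check, the reported error count can be recomputed by applying the converged weights to the whole dataset (a sketch; it assumes X and y from the make_blobs cell above are still in scope):

```python
import numpy as np

# converged parameters reported above
w = np.array([-0.07863051, -0.38637461, 0.04367386])
b = -0.0053

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))      # sigmoid outputs
pred = (p > 0.5).astype(int)                # threshold at 0.5
print("misclassified:", np.sum(pred != y))  # reported error: 14
```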
$y = f(x, y) = 1.5 + 3.3x - 2.5y + 1.2xy$ for every $0 \le x, y \le 1$.

The training dataset was created by randomly sampling the input feature space and computing the corresponding labels.
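A minimal sketch of how such a dataset could be generated (the sample count and variable names are assumptions, not taken from the tutorial):

```python
import numpy as np

no_data = 100                            # assumed number of samples
rng = np.random.default_rng(1)
x1 = rng.random(no_data)                 # x sampled uniformly on [0, 1]
x2 = rng.random(no_data)                 # y sampled uniformly on [0, 1]
X = np.stack([x1, x2], axis=1)
Y = 1.5 + 3.3*x1 - 2.5*x2 + 1.2*x1*x2    # corresponding labels
```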
For the perceptron (inputs $x$, $y$ and a bias input $+1$), the activation function is sigmoidal, so the range of the output must be known. Let

$y = f(x, y) = 1.5 + 3.3x - 2.5y + 1.2xy$, where $0 \le x, y \le 1$.

Differentiating the function to find the maximum and minimum points:

$\frac{\partial f}{\partial x} = 3.3 + 1.2y = 0 \ \rightarrow\ y = -2.75$
$\frac{\partial f}{\partial y} = -2.5 + 1.2x = 0 \ \rightarrow\ x = 2.08$

That is, any stationary point occurs outside the given region, so the maximum and minimum occur at boundary points of the input region.
For the perceptron, activation function: $f(u) = \frac{5.8}{1 + e^{-u}} - 1.0$

Values of the target function at the corners of the input region:

(x, y)   (0, 0)   (1, 0)   (0, 1)   (1, 1)
y         1.5      4.8     −1.0      3.5

Therefore, $y \in [-1.0,\ 4.8]$.
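Evaluating the target function at the four corners confirms this range (a quick check):

```python
f = lambda x, y: 1.5 + 3.3*x - 2.5*y + 1.2*x*y

corners = [(0, 0), (1, 0), (0, 1), (1, 1)]
vals = [f(x, y) for x, y in corners]
print(vals)                    # [1.5, 4.8, -1.0, 3.5]
print(min(vals), max(vals))    # -1.0 4.8  →  y ∈ [−1.0, 4.8]
```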
```python
# a class for the perceptron
class Perceptron():
    def __init__(self):
        self.w = torch.tensor(np.random.rand(2, 1), dtype=torch.double,
                              requires_grad=True)
        self.b = torch.tensor(0., dtype=torch.double, requires_grad=True)

    def __call__(self, x):
        u = torch.matmul(torch.tensor(x), self.w) + self.b
        y = 5.8*torch.sigmoid(u) - 1.0
        return y

# mean squared error as the loss function
def loss_fn(y_pred, d):
    return torch.mean(torch.square(y_pred - d))

# create a perceptron
model = Perceptron()
```
```python
# training (no_data, no_epochs and lr are defined elsewhere in the notebook)
idx = np.arange(no_data)
for epoch in range(no_epochs):
    np.random.shuffle(idx)
    XX, YY = X[idx], Y[idx]
    y_ = model(XX)
    loss_ = loss_fn(y_, torch.tensor(YY))
    loss_.backward()
    with torch.no_grad():
        model.w -= lr * model.w.grad
        model.b -= lr * model.b.grad
        model.w.grad = None
        model.b.grad = None
```
At convergence: $\mathbf{w} = (2.95,\ -1.91)^T$, $b = -0.48$, mse $= 0.014$.

If $\mathbf{x} = (x, y)^T$:

$u = \mathbf{w}^T\mathbf{x} + b = (2.95,\ -1.91)\,(x,\ y)^T - 0.48 = 2.95x - 1.91y - 0.48$

The function learned by the perceptron:

$y = \frac{5.8}{1 + e^{-u}} - 1.0 = \frac{5.8}{1 + e^{-(2.95x - 1.91y - 0.48)}} - 1.0$
A linear neuron learns a linear function. The above equation can be written as a linear equation:

$y = 1.5 + 3.3x - 2.5y + 1.2xy$
$y = 1.5 + 3.3x_1 - 2.5x_2 + 1.2x_3$

where the linear neuron receives 3 inputs: $x_1 = x$, $x_2 = y$, and $x_3 = xy$ (plus a bias input $+1$; output $y = u$).
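Because the model is linear in the engineered inputs, the solution the neuron should approach can be checked in closed form. A least-squares sketch on the features $(x, y, xy)$ (sampling and names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
x, y_in = rng.random(200), rng.random(200)
target = 1.5 + 3.3*x - 2.5*y_in + 1.2*x*y_in

# design matrix with inputs x1=x, x2=y, x3=xy and a bias column
A = np.stack([x, y_in, x*y_in, np.ones_like(x)], axis=1)
coef, *_ = np.linalg.lstsq(A, target, rcond=None)
print(coef)   # ≈ [3.3, -2.5, 1.2, 1.5] — the target is exactly representable
```

The converged GD values reported next sit near this exact solution.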
At convergence: weights $\mathbf{w} = (3.40,\ -2.41,\ 1.02)^T$ and bias $b = 1.45$; mean square error $= 1.8 \times 10^{-4}$.

The function learned by the linear neuron:

$y = w_1x_1 + w_2x_2 + w_3x_3 + b$
$y = 3.40x - 2.41y + 1.02xy + 1.45$
            Linear Neuron                        Perceptron
w           (3.40, −2.41, 1.02)                  (2.95, −1.91)
b           1.45                                 −0.48
m.s.e.      1.8 × 10⁻⁴                           0.014
function    y = 3.4x − 2.41y + 1.02xy + 1.45     y = 5.8 / (1 + e^(−(2.95x − 1.91y − 0.48))) − 1.0

Target: $y = 1.5 + 3.3x - 2.5y + 1.2xy$ for $0 \le x, y \le 1$.
Comparison over $0 \le x, y \le 1$:

Target:         $y = 1.5 + 3.3x - 2.5y + 1.2xy$
Linear neuron:  $y = 3.4x - 2.41y + 1.02xy + 1.45$
Perceptron:     $y = \frac{5.8}{1 + e^{-(2.95x - 1.91y - 0.48)}} - 1.0$