hw_8_tf_updated-AartiPati
November 17, 2023
1 Tensorflow Tutorial
Before doing the coding assignment for Unit 8, you probably need to get yourself familiar with TensorFlow, an open-source software library for numerical computation, particularly well suited and fine-tuned for large-scale machine learning. The basic principle is that you define your computation graph and TensorFlow will take the graph and run it efficiently as optimized C++ code.
1.1 Download the tensorflow package
If you are using Anaconda, you first get into your environment with:
source activate env_name
and then install TensorFlow with:
conda install -c conda-forge tensorflow
This command will install a CPU-only version on your machine.
If you are not using Anaconda, you can instead run:
pip install tensorflow
which will install the latest version of TensorFlow.
For this tutorial we are using Python 3.6; TensorFlow version 2.4.1.
[1]:
import sys
import tensorflow.compat.v1 as tf
import numpy as np

print(tf.__version__)
tf.disable_eager_execution()

2.14.0
1.2 Creating And Running a Graph
Our first goal is to define a computation graph (computation_graph.png) in TensorFlow and trigger the computation. Each node in the graph is called an operation and each edge represents the flow of data. A node can either operate on tensors (addition, subtraction, multiplication, etc.) or generate a tensor (constants and variables). Each node takes zero or more tensors as inputs and produces a tensor as an output.
[2]:
x = tf.Variable(3, name="x")
y = tf.Variable(4, name="y")
two = tf.constant(2)
op1 = tf.multiply(x, x)
op2 = tf.multiply(x, op1)
op3 = tf.add(y, two)
op4 = tf.add(op2, op3)
Your operations will be built on the default graph, since you didn't specify a tf.Graph(); we will talk about this later. Once you have defined your operations, you can start a session and execute your graph.
[3]:
with tf.Session() as sess:   # starts session, now we can evaluate
    x.initializer.run()      # inits val for x to 3
    y.initializer.run()
    result = op4.eval()      # evaluates op4
You initialize the variables in the graph and trigger the computation by evaluating the last operation. Since op4 depends on op2 and op3, evaluating it will recursively evaluate op2 and op3 until it reaches the leaf nodes, which are the variables and the constant defined above.
[4]:
result

[4]: 33
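As a side note (not part of the original notebook), each call to eval() triggers a separate graph run, so evaluating several nodes one after another recomputes their shared dependencies; a single Session.run() call evaluates them in one pass. A minimal sketch, assuming the same x, y, op2, and op4 nodes defined above:

with tf.Session() as sess:
    x.initializer.run()
    y.initializer.run()
    # two separate evals: shared nodes such as op1 are recomputed for each call
    print(op2.eval(), op4.eval())            # -> 27 33
    # one run: both values are computed in a single pass over the graph
    op2_val, op4_val = sess.run([op2, op4])
    print(op2_val, op4_val)                  # -> 27 33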
1.3 Managing the Graph

[5]:
def reset_graph(seed=42):
    tf.reset_default_graph()
    tf.set_random_seed(seed)
    np.random.seed(seed)
[6]:
reset_graph()
You can create your own graphs and run them in sessions.
[7]:
graph1 = tf.Graph()
with graph1.as_default():
    x = np.random.rand(100).astype(np.float32)
    target = x * 0.3 - 0.23

    W = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
    b = tf.Variable(tf.zeros([1]))
    pred = W * x + b
    loss = tf.reduce_mean(tf.square(target - pred))

    print('num of trainable variables = %d' % len(tf.trainable_variables()))
    print('num of global variables = %d' % len(tf.global_variables()))
    print('graph1=', graph1)
    print('get default graph in current session = ', tf.get_default_graph())

print("*" * 100)
print('num of trainable variables = %d' % len(tf.trainable_variables()))
print('num of global variables = %d' % len(tf.global_variables()))
print('global default graph = ', tf.get_default_graph())
print('get default graph in current session = ', tf.get_default_graph())

graph2 = tf.Graph()
with graph2.as_default():
    x = np.random.rand(100).astype(np.float32)
    target = x * 0.4 - 0.73

    W = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
    b = tf.Variable(tf.zeros([1]))
    pred = W * x + b
    loss = tf.reduce_mean(tf.square(target - pred))

    print("*" * 100)
    print('num of trainable variables = %d' % len(tf.trainable_variables()))
    print('num of global variables = %d' % len(tf.global_variables()))
    print('graph2 = ', graph2)
    print('get default graph in current session = ', tf.get_default_graph())
num of trainable variables = 2
num of global variables = 2
graph1= <tensorflow.python.framework.ops.Graph object at 0x0000021BD05E0F70>
get default graph in current session =  <tensorflow.python.framework.ops.Graph object at 0x0000021BD05E0F70>
****************************************************************************************************
num of trainable variables = 0
num of global variables = 0
global default graph =  <tensorflow.python.framework.ops.Graph object at 0x0000021BC3B4C9D0>
get default graph in current session =  <tensorflow.python.framework.ops.Graph object at 0x0000021BC3B4C9D0>
****************************************************************************************************
num of trainable variables = 2
num of global variables = 2
graph2 =  <tensorflow.python.framework.ops.Graph object at 0x0000021BD05E1360>
get default graph in current session =  <tensorflow.python.framework.ops.Graph object at 0x0000021BD05E1360>
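As a small follow-up (not in the original notebook), you can execute one of these custom graphs by passing it to the Session constructor. A sketch, assuming the graph2, loss, W, and b defined in the cell above:

with graph2.as_default():
    init = tf.global_variables_initializer()   # the init op must live in graph2

with tf.Session(graph=graph2) as sess:
    sess.run(init)
    print(sess.run(loss))   # graph2's loss, evaluated with the still-untrained W and b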
[ ]:
1.4 Practice Create Graph with Tensorflow
Now it's your turn to practice defining a computation graph in TensorFlow (cross_entropy.png). (NOTE: use placeholders to define the inputs instead of tf.Variable.)
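For reference, the function depicted in cross_entropy.png (and restated in the comments of the solution cell below) is the binary cross-entropy:

$$f(y, p) = -\left[(1 - y)\log(1 - p) + y\log(p)\right]$$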
[8]:
# TODO :: define the cross entropy computation graph in tensorflow; expect 10-15 lines of code
# (Requirement : create your own graph with tf.Graph and run your graph;
#  use placeholder to define variable instead of tf.Variable)
# Cross Entropy Function: f(y, p) = -[(1 - y) * log(1 - p) + y * log(p)]

graph = tf.Graph()
with graph.as_default():
    node_p = tf.placeholder(tf.float32, name='node_p')
    node_y = tf.placeholder(tf.float32, name='node_y')
    cross_entropy = ((1 - node_y) * tf.math.log(1 - node_p)
                     + node_y * tf.math.log(node_p)) * (-1)

with tf.Session(graph=graph) as sess:
    feed_dict = {node_p: 0.7, node_y: 0.3}
    result = sess.run(cross_entropy, feed_dict=feed_dict)
    print(result)

0.94978344
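As a quick sanity check (not part of the assignment cell), the same formula evaluated directly in NumPy should reproduce the TensorFlow result above:

p, y_true = 0.7, 0.3
print(-((1 - y_true) * np.log(1 - p) + y_true * np.log(p)))   # ~0.9497834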
1.5 Linear Regression

1.5.1 Using the Normal Equation
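For reference (this is the formula the TODO in the next cell asks for), the normal equation gives the closed-form least-squares solution

$$\hat{\theta} = (X^{\top} X)^{-1} X^{\top} y,$$

where X is the design matrix with a bias column prepended and y is the target vector.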
[9]:
import numpy as np
from sklearn.datasets import fetch_california_housing

reset_graph()

housing = fetch_california_housing()
m, n = housing.data.shape
housing_data_plus_bias = np.c_[np.ones((m, 1)), housing.data]

X = tf.constant(housing_data_plus_bias, dtype=tf.float32, name="X")
y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32, name="y")
XT = tf.transpose(X)

# TODO :: write down the normal equation; for more detail on the normal equation,
# you can refer to http://mlwiki.org/index.php/Normal_Equation
# hint : you may want to use tf.matrix_inverse and tf.matmul
# Normal Equation: theta = (X^T X)^-1 X^T y
theta = tf.matmul(tf.matmul(tf.matrix_inverse(tf.matmul(XT, X)), XT), y)

with tf.Session() as sess:
    theta_value = theta.eval()
[10]:
theta_value
[10]:
array([[-3.6896515e+01],
[ 4.3682209e-01],
[ 9.4436919e-03],
[-1.0742678e-01],
[ 6.4522374e-01],
[-3.9421757e-06],
[-3.7879660e-03],
[-4.2084768e-01],
[-4.3402091e-01]], dtype=float32)
[11]:
X = housing_data_plus_bias
y = housing.target.reshape(-1, 1)

# TODO :: implement the same normal equation with numpy
# hint : you may want to use np.linalg.inv
# Normal Equation: theta = (X^T X)^-1 X^T y
XT = X.T
theta_numpy = np.linalg.inv(XT.dot(X)).dot(XT).dot(y)
print(theta_numpy)
[[-3.69419202e+01]
[ 4.36693293e-01]
[ 9.43577803e-03]
[-1.07322041e-01]
[ 6.45065694e-01]
[-3.97638942e-06]
[-3.78654265e-03]
[-4.21314378e-01]
[-4.34513755e-01]]
Compare with Scikit-Learn
[12]:
from sklearn.linear_model import LinearRegression

# TODO :: define the linear regression model and fit the training data.
# The model name should be lin_reg.
lin_reg = LinearRegression()
lin_reg.fit(housing.data, housing.target)

#print(np.r_[lin_reg.intercept_.reshape(-1, 1), lin_reg.coef_.T])
#print(np.r_[(lin_reg.intercept_.reshape(-1, 1), lin_reg.coef_.T)])
print(np.concatenate([lin_reg.intercept_.reshape(-1, 1),
                      lin_reg.coef_.reshape(-1, 1)], axis=0))
[[-3.69419202e+01]
[ 4.36693293e-01]
[ 9.43577803e-03]
[-1.07322041e-01]
[ 6.45065694e-01]
[-3.97638942e-06]
[-3.78654265e-03]
[-4.21314378e-01]
[-4.34513755e-01]]
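An optional check (not in the original notebook): the NumPy normal-equation solution and the Scikit-Learn fit should agree to numerical precision.

sk_theta = np.concatenate([lin_reg.intercept_.reshape(-1, 1),
                           lin_reg.coef_.reshape(-1, 1)], axis=0)
print(np.allclose(theta_numpy, sk_theta))   # expected: True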
1.6 Using Batch Gradient Descent
Gradient Descent requires scaling the feature vectors first. We could do this using TF, but let's just use Scikit-Learn for now.
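As an aside (a sketch only, not what the notebook actually does), the same standardization could be expressed with TF 1.x ops; X_scaled_tf below is a hypothetical name and is not reused later:

X_raw = tf.constant(housing.data, dtype=tf.float32)
mean, variance = tf.nn.moments(X_raw, axes=[0])      # per-feature mean and variance
X_scaled_tf = (X_raw - mean) / tf.sqrt(variance)     # standardize each feature
with tf.Session() as sess:
    scaled = sess.run(X_scaled_tf)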
[13]:
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
scaled_housing_data = scaler.fit_transform(housing.data)
scaled_housing_data_plus_bias = np.c_[np.ones((m, 1)), scaled_housing_data]
[14]:
print(scaled_housing_data_plus_bias.mean(axis=0))
print(scaled_housing_data_plus_bias.mean(axis=1))
print(scaled_housing_data_plus_bias.mean())
print(scaled_housing_data_plus_bias.shape)

[ 1.00000000e+00  6.60969987e-17  5.50808322e-18  6.60969987e-17
 -1.06030602e-16 -1.10161664e-17  3.44255201e-18 -1.07958431e-15
 -8.52651283e-15]
[ 0.38915536  0.36424355  0.5116157  ... -0.06612179 -0.06360587
  0.01359031]
0.11111111111111005
(20640, 9)
[15]:
reset_graph()

n_epochs = 1000
learning_rate = 0.01

X = tf.constant(scaled_housing_data_plus_bias, dtype=tf.float32, name="X")
y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32, name="y")
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
y_pred = tf.matmul(X, theta, name="predictions")
error = y_pred - y
mse = tf.reduce_mean(tf.square(error), name="mse")
gradients = 2/m * tf.matmul(tf.transpose(X), error)
training_op = tf.assign(theta, theta - learning_rate * gradients)

init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)

    for epoch in range(n_epochs):
        if epoch % 100 == 0:
            print("Epoch", epoch, "MSE =", mse.eval())
        sess.run(training_op)

    best_theta = theta.eval()
Epoch 0 MSE = 9.161542
Epoch 100 MSE = 0.71450037
Epoch 200 MSE = 0.56670487
Epoch 300 MSE = 0.55557173
Epoch 400 MSE = 0.54881126
Epoch 500 MSE = 0.5436363
Epoch 600 MSE = 0.5396291
Epoch 700 MSE = 0.5365092
Epoch 800 MSE = 0.53406775
Epoch 900 MSE = 0.53214735
[16]:
best_theta
[16]:
array([[ 2.0685525 ],
[ 0.8874027 ],
[ 0.14401658],
[-0.34770885],
[ 0.3617837 ],
[ 0.00393811],
[-0.04269556],
[-0.6614528 ],
[-0.63752776]], dtype=float32)
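A side note (not in the original notebook): instead of hand-deriving the gradient 2/m * X^T (y_pred - y) as in cell [15], TF 1.x can compute it by automatic differentiation and plug it into the same update rule. A sketch, with auto_gradients and training_op_autodiff as hypothetical names:

auto_gradients = tf.gradients(mse, [theta])[0]       # d(mse)/d(theta), same shape as theta
training_op_autodiff = tf.assign(theta, theta - learning_rate * auto_gradients)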
1.7 Using a GradientDescentOptimizer
[17]:
reset_graph()

n_epochs = 1000
learning_rate = 0.01

X = tf.constant(scaled_housing_data_plus_bias, dtype=tf.float32, name="X")
y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32, name="y")
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
y_pred = tf.matmul(X, theta, name="predictions")
error = y_pred - y
mse = tf.reduce_mean(tf.square(error), name="mse")
[18]:
# TODO :: define the GradientDescentOptimizer and call minimize on the optimizer;
# the result should be named training_op; you can refer to the tf documentation :
# https://www.tensorflow.org/api_docs/python/tf/compat/v1/train/GradientDescentOptimizer
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(mse)
[19]:
init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)

    for epoch in range(n_epochs):
        if epoch % 100 == 0:
            print("Epoch", epoch, "MSE =", mse.eval())
        sess.run(training_op)

    best_theta = theta.eval()

print("Best theta:")
print(best_theta)
Epoch 0 MSE = 9.161542
Epoch 100 MSE = 0.71450037
Epoch 200 MSE = 0.5667048
Epoch 300 MSE = 0.5555718
Epoch 400 MSE = 0.54881126
Epoch 500 MSE = 0.5436363
Epoch 600 MSE = 0.53962916
Epoch 700 MSE = 0.5365092
Epoch 800 MSE = 0.53406775
Epoch 900 MSE = 0.53214735
Best theta:
[[ 2.0685525 ]
[ 0.8874027 ]
[ 0.14401658]
[-0.34770882]
[ 0.36178368]
[ 0.00393811]
[-0.04269556]
[-0.6614528 ]
[-0.6375277 ]]
[20]:
# TODO :: repeat the same procedure, this time using the MomentumOptimizer; you can
# refer to the tensorflow documentation :
# https://www.tensorflow.org/api_docs/python/tf/compat/v1/train/MomentumOptimizer
optimizer = tf.train.MomentumOptimizer(learning_rate=learning_rate, momentum=0.9)
training_op = optimizer.minimize(mse)
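To actually train with momentum (the original cell only defines training_op and never runs it), the same loop as in cell [19] could be reused; a sketch, with best_theta_momentum as a hypothetical name:

init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    for epoch in range(n_epochs):
        sess.run(training_op)
    best_theta_momentum = theta.eval()
print(best_theta_momentum)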
1.8 Saving and restoring a model
[21]:
reset_graph()

n_epochs = 1000
learning_rate = 0.01

X = tf.constant(scaled_housing_data_plus_bias, dtype=tf.float32, name="X")
y = tf.constant(housing.target.reshape(-1, 1), dtype=tf.float32, name="y")
theta = tf.Variable(tf.random_uniform([n + 1, 1], -1.0, 1.0, seed=42), name="theta")
y_pred = tf.matmul(X, theta, name="predictions")
error = y_pred - y
mse = tf.reduce_mean(tf.square(error), name="mse")
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(mse)

init = tf.global_variables_initializer()
saver = tf.train.Saver()

with tf.Session() as sess:
    sess.run(init)

    for epoch in range(n_epochs):
        if epoch % 100 == 0:
            print("Epoch", epoch, "MSE =", mse.eval())
            save_path = saver.save(sess, "/tmp/my_model.ckpt")
        sess.run(training_op)

    best_theta = theta.eval()
    save_path = saver.save(sess, "/tmp/my_model_final.ckpt")
Epoch 0 MSE = 9.161542
Epoch 100 MSE = 0.71450037
Epoch 200 MSE = 0.5667048
Epoch 300 MSE = 0.5555718
Epoch 400 MSE = 0.54881126
Epoch 500 MSE = 0.5436363
Epoch 600 MSE = 0.53962916
Epoch 700 MSE = 0.5365092
Epoch 800 MSE = 0.53406775
Epoch 900 MSE = 0.53214735
[22]:
best_theta
[22]:
array([[ 2.0685525 ],
[ 0.8874027 ],
[ 0.14401658],
[-0.34770882],
[ 0.36178368],
[ 0.00393811],
[-0.04269556],
[-0.6614528 ],
[-0.6375277 ]], dtype=float32)
[23]:
with tf.Session() as sess:
    saver.restore(sess, "/tmp/my_model_final.ckpt")
    best_theta_restored = theta.eval()   # not shown in the book
INFO:tensorflow:Restoring parameters from /tmp/my_model_final.ckpt
[24]:
np.allclose(best_theta, best_theta_restored)

[24]: True
Note: By default the saver also saves the graph structure itself in a second file with the extension
.meta.
You can use the function tf.train.import_meta_graph() to restore the graph structure.
This function loads the graph into the default graph and returns a Saver that can then be used to
restore the graph state (i.e., the variable values).
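A sketch of that workflow (assuming the checkpoint written above and the variable name "theta" used in cell [21]); this restores the model without rebuilding the graph in Python:

reset_graph()
saver = tf.train.import_meta_graph("/tmp/my_model_final.ckpt.meta")   # loads the graph structure
theta = tf.get_default_graph().get_tensor_by_name("theta:0")

with tf.Session() as sess:
    saver.restore(sess, "/tmp/my_model_final.ckpt")                    # restores the variable values
    best_theta_restored = theta.eval()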
1.9 Using TensorBoard
[2]:
import tensorboard
print(tensorboard.__version__)

2.14.1
[3]:
# Load the TensorBoard notebook extension.
%load_ext tensorboard
[ ]:
[4]:
g = tf.Graph()
with g.as_default():
    X = tf.placeholder(tf.float32, name="x")

    W1 = tf.placeholder(tf.float32, name="W1")
    b1 = tf.placeholder(tf.float32, name="b1")
    a1 = tf.nn.relu(tf.matmul(X, W1) + b1)

    W2 = tf.placeholder(tf.float32, name="W2")
    b2 = tf.placeholder(tf.float32, name="b2")
    a2 = tf.nn.relu(tf.matmul(a1, W2) + b2)

    W3 = tf.placeholder(tf.float32, name="W3")
    b3 = tf.placeholder(tf.float32, name="b3")
    y_hat = tf.matmul(a2, W3) + b3

# tf.summary.FileWriter("logs", g).close()
tf.summary.FileWriter(logdir="logs/", graph=g)
[4]:
<tensorflow.python.summary.writer.writer.FileWriter at 0x1d7dd2f9d50>
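As an optional extension (not in the original notebook), TensorBoard can also display numeric curves; in TF 1.x this is done with scalar summaries written by the same kind of FileWriter. A sketch, assuming a scalar loss tensor such as the mse defined in the earlier training cells:

mse_summary = tf.summary.scalar("MSE", mse)
file_writer = tf.summary.FileWriter("logs/", tf.get_default_graph())
# inside a training loop you would periodically run:
#     summary_str = sess.run(mse_summary)
#     file_writer.add_summary(summary_str, epoch)
file_writer.close()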
[5]:
tf.print(g, output_stream=sys.stdout)
[5]:
<tf.Operation 'PrintV2' type=PrintV2>
[6]:
import os
path = os.path.abspath(os.getcwd())
[7]:
logs_path = path + "\logs"
print(logs_path)

C:\Users\patia\Desktop\RICE\FALL_2023\MACH_LEARN_COMP_642\MODULES\08MODULE\ASSIGNMENT\ASSIGNMENT\HW8_Provided\logs
[9]:
# Activates the tensorboard UI to visualize the graph g
%reload_ext tensorboard
#%tensorboard --logdir "C:\Users\patia\Desktop\RICE\FALL_2023\MACH_LEARN_COMP_642\MODULES\08MODULE\ASSIGNMENT\ASSIGNME
%tensorboard --logdir "C:\Users\patia\Desktop\RICE\FALL_2023\MACH_LEARN_COMP_642\MODULES\08MODULE\ASSIGNMENT\ASSIGNME

Reusing TensorBoard on port 6006 (pid 232), started 22:49:34 ago. (Use '!kill 232' to kill it.)
<IPython.core.display.HTML object>
If the commands above are not working, open a new terminal and run the command tensorboard --logdir "Your logs_path output". Then follow the terminal instructions to view the TensorBoard visualization.
[12]:
# Activates the tensorboard UI to visualize the graph g
%reload_ext tensorboard
#%tensorboard --logdir "C:\Users\patia\Desktop\RICE\FALL_2023\MACH_LEARN_COMP_642\MODULES\08MODULE\ASSIGNMENT\ASSIGNME
%tensorboard --logdir "C:\Users\patia\Desktop\RICE\FALL_2023\MACH_LEARN_COMP_642\MODULES\08MODULE\ASSIGNMENT\ASSIGNME --host localhost

<IPython.core.display.HTML object>
[ ]: