RNN function

Question
RNN function with code; the complete question is given below.
Unlike RNNs that recurrently process the tokens of a sequence one by one, self-attention ditches sequential operations in favor of parallel computation. To use the sequence order information, we can inject absolute or relative positional information by adding positional encoding to the input representations. Positional encodings can be either learned or fixed. In the following, we describe a fixed positional encoding based on sine and cosine functions (Vaswani et al., 2017).
Suppose that the input representation $\mathbf{X} \in \mathbb{R}^{n \times d}$ contains the $d$-dimensional embeddings for the $n$ tokens of a sequence. The positional encoding outputs $\mathbf{X} + \mathbf{P}$ using a positional embedding matrix $\mathbf{P} \in \mathbb{R}^{n \times d}$ of the same shape, whose element on the $i^{\mathrm{th}}$ row and the $(2j)^{\mathrm{th}}$ or the $(2j+1)^{\mathrm{th}}$ column is

$$
p_{i,\,2j} = \sin\left(\frac{i}{10000^{2j/d}}\right), \qquad
p_{i,\,2j+1} = \cos\left(\frac{i}{10000^{2j/d}}\right).
\tag{10.6.2}
$$
At first glance, this trigonometric-function design looks weird. Before we explain this design, let us first implement it in the following PositionalEncoding class.
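Below is a minimal sketch of such a PositionalEncoding class, assuming PyTorch as the framework; the argument names num_hiddens, dropout, and max_len are illustrative choices, not taken from the question, and an even embedding dimension is assumed so the sine and cosine columns pair up.

```python
import torch
from torch import nn


class PositionalEncoding(nn.Module):
    """Fixed sinusoidal positional encoding following Eq. (10.6.2) (sketch)."""

    def __init__(self, num_hiddens, dropout, max_len=1000):
        super().__init__()
        self.dropout = nn.Dropout(dropout)
        # P has shape (1, max_len, num_hiddens) so it broadcasts over the batch.
        self.P = torch.zeros((1, max_len, num_hiddens))
        # angles[i, j] = i / 10000^(2j / d) for rows i and even columns 2j
        positions = torch.arange(max_len, dtype=torch.float32).reshape(-1, 1)
        divisors = torch.pow(
            10000,
            torch.arange(0, num_hiddens, 2, dtype=torch.float32) / num_hiddens)
        angles = positions / divisors
        self.P[:, :, 0::2] = torch.sin(angles)  # even columns: sine
        self.P[:, :, 1::2] = torch.cos(angles)  # odd columns: cosine

    def forward(self, X):
        # X has shape (batch_size, num_steps, num_hiddens); add the first
        # num_steps rows of P to it, then apply dropout.
        X = X + self.P[:, :X.shape[1], :].to(X.device)
        return self.dropout(X)


# Quick usage check (hypothetical sizes):
encoding = PositionalEncoding(num_hiddens=32, dropout=0.0)
X = torch.zeros((2, 60, 32))        # (batch_size, num_steps, num_hiddens)
print(encoding(X).shape)            # torch.Size([2, 60, 32])
```

Precomputing P once in the constructor and slicing it in forward avoids recomputing the trigonometric terms on every call; dropout on the summed representation is one common regularization choice.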