My code doesn't pass this test case. How would I get my code to pass it?
The syntax errors are NOT defined by the word "TEST".
A line is a sequence of tokens separated by commas, and it has a syntax error when one of the following conditions is met:
There is a missing comma between two tokens.
There is a space before a comma between two tokens.
There is an extra comma.
In short: missing commas, extra commas, and extra spaces.
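Two of these conditions show up directly in the raw text (a space before a comma, a doubled or leading comma), so they can be checked before tokenizing. Here is a minimal sketch of such a check; has_syntax_error is a hypothetical helper name, and the missing-comma rule is deliberately left out because it depends on how the full problem statement defines a token:

def has_syntax_error(line):
    # Flag the error conditions that are visible in the raw line itself.
    if " ," in line:            # space before a comma
        return True
    if ",," in line:            # doubled (extra) comma
        return True
    if line.startswith(","):    # extra comma at the start of the line
        return True
    return False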
Sample Input 3
this file, tests
TEST a, b c
erroneous cases
TEST, a, b
there should, only, be
TEST, a b
six lines, in, the, output
TEST a b
and it, should
TEST a , b, c
include this, one
TEST a, b,, c
Sample Output 3
this file tests
erroneous cases
there should only be
six lines in the output
and it should
include this one
Explanation 3
All of the lines beginning with TEST have some sort of syntax error, so they should be omitted from the output. The other lines have no errors, so they should be tokenized and output. The errors are as follows:
TEST a, b c: the comma between b and c is missing
The actual input is:
a, b c
TEST, a, b: there is an extra comma after TEST
The actual input is:
, a, b
TEST, a b: the comma between a and b is missing, and there is an extra comma after TEST
The actual input is:
, a b
TEST a b: the comma between a and b is missing
The actual input is:
a b
TEST a , b, c: there is a space before the comma between a and b
The actual input is:
a , b, c
TEST a, b,, c: there is an extra comma between b and c
The actual input is:
a, b,, c
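As a quick sanity check against these examples, the string-level tests in the hypothetical has_syntax_error helper sketched above would catch the extra-comma and space-before-comma cases but not the missing-comma ones, which is where the remaining work is:

for bad in [", a, b", ", a b", "a , b, c", "a, b,, c", "a, b c", "a b"]:
    print(repr(bad), "->", has_syntax_error(bad))
# ', a, b', ', a b', 'a , b, c' and 'a, b,, c' are flagged (True);
# 'a, b c' and 'a b' are not, because detecting a missing comma needs
# a definition of what counts as a single token.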
The following code works for tokenizing; I just need help implementing the error-detection conditions.
def tokenize(line):
    # Split on whitespace, strip surrounding commas from each piece,
    # then split on any internal commas so only the words remain.
    token_list = []
    parts = line.split()
    for token in parts:
        token_list.extend(token.strip(",").split(","))
    return token_list

def main():
    # Read lines until a blank line or EOF.
    line_list = []
    while True:
        try:
            line = input().strip()
            if not line:
                break
            line_list.append(line)
        except EOFError:
            break
    # Tokenize every line and print the tokens separated by single spaces.
    for line in line_list:
        tokenized_line = tokenize(line)
        if tokenized_line:
            print(" ".join(tokenized_line))

if __name__ == "__main__":
    main()
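One way to wire the check into the existing program, assuming the hypothetical has_syntax_error helper sketched earlier, is to skip flagged lines in the final loop of main() before tokenizing. This is only a sketch of the control flow for the conditions listed above, not a complete solution, since the missing-comma rule still needs the full token definition:

    for line in line_list:
        if has_syntax_error(line):  # omit lines with a detected syntax error
            continue
        tokenized_line = tokenize(line)
        if tokenized_line:
            print(" ".join(tokenized_line))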