This is the final stage. I’ve tried executing the pipeline directly using os.system, but the output doesn’t match what’s expected, especially in the case of tail -f. I even tried replacing it with plain tail and cat; the results differ, but the output still doesn’t match.
Hey @imxade, I tried running your code against the previous stages, but it’s actually no longer passing a previous stage #BR6 (Pipelines - Dual-command pipeline).
You are clearly well on your way with the challenges. And if that is your only goal, that is great.
But if you want to mimic real life shell behavior then your shell is currently doing some stuff in a way that breaks normal expectations:
Caching in allCmds: it looks like this caches all executables at startup. Caching paths is risky in a shell: what happens when I run these two separate commands in sequence? Will it find the newly created executable?
gcc main.c
./main
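To make the pitfall concrete, here is a minimal Python sketch (not taken from the code under review; `resolve` is a hypothetical helper) that looks up the executable at invocation time instead of from a cache built at startup:

```python
import shutil

def resolve(cmd: str):
    """Look up cmd on PATH at invocation time, not from a startup cache.

    A fresh lookup sees binaries created after the shell started (e.g. the
    ./main produced by gcc above); a cache built once at startup would not.
    """
    return shutil.which(cmd)

# Every invocation re-scans PATH, so newly compiled executables are found.
print(resolve("sh"))
```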
It looks like you run a pipeline in sequence rather than in parallel. This requires the entire output of each stage to be stored (as you do in files like .stdout.log) so that it can be passed to the next stage. A potentially more severe issue is that while this is happening there is no interactivity for the user: no output appears until the last stage runs. And what about long-running or never-ending processes at the start of the pipeline? Their data will never reach the end. Note: you do get a long way on this, since you “merge” non-built-ins and let the system shell handle the parallelization. But it looks like a mix of built-ins and executables would break this.
You are putting in all this effort to parse the command line from the user, but then you use the system function and have to essentially recombine it again. Tongue-in-cheek: why not just pass the line directly to system then? But now you have to re-encode the proper formatting for the system shell (which might vary!), including redirection syntax etc. You are essentially implementing “your shell” by parsing the command line, and then telling the system shell to do the work.
Since you are using Python, if you want to move to an approach with less brittle re-encoding, you might want to look at subprocess.Popen. It also makes piping trivial.
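As a minimal illustration (hypothetical command names, not your code): chaining Popen objects gives each stage a real OS pipe, so all stages run in parallel and the final stage writes directly to the terminal:

```python
import subprocess

# Hypothetical three-stage pipeline: ls /tmp | tail -n 5 | head -n 3
ls = subprocess.Popen(["ls", "/tmp"], stdout=subprocess.PIPE)
tail = subprocess.Popen(["tail", "-n", "5"], stdin=ls.stdout, stdout=subprocess.PIPE)
head = subprocess.Popen(["head", "-n", "3"], stdin=tail.stdout)  # stdout=None: inherit terminal

# Close the parent's copies of the read ends so EOF (and SIGPIPE) propagate.
ls.stdout.close()
tail.stdout.close()

head.wait()
ls.wait()
tail.wait()
```

Note that all three processes run concurrently, so something like tail -f in the middle streams data through as it arrives instead of being buffered to a file first.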
I think I broke something later on, since I completed 42/43 stages before that; I’ll look into it.
Also, only built-in commands are split, so a command like “tail <> | head <>” should be executed as-is without splitting. However, it was still failing for some reason.
Hi, so I’m having the same issue but in Rust: multi-command pipelines fail to output the last stage.
Honestly, I’m a bit confused and stuck. I’m trying to implement multi-command pipelines in my shell (e.g. ls | tail | head | grep ...). Some pipelines work, but when I run a pipeline like ls /tmp/quz | tail -n 5 | head -n 3 | grep "f-79" or something like this, I just get the "$" prompt and no output, even though the file exists and should match.
Here’s what I’ve tried so far:
I’m using nix::unistd::pipe() to create pipes and std::fs::File::from_raw_fd to wrap the fds.
For each segment, I set up the pipes, spawn the child, and drop the write end in the parent.
After the loop, I drop the last read end and wait for all children.
I flush stdout and stderr after waiting.
But it’s still not working for these multi-stage pipelines. I feel like I’m missing something with closing the pipes or waiting for the last process, but I can’t figure out what.
Here’s the relevant snippet:
```rust
for (i, seg) in segments.iter().enumerate() {
    let cmd = seg[0];
    let args = &seg[1..];
    let is_first = i == 0;
    let is_last = i == n - 1;
    let is_bi = is_builtin(cmd);
    let mut next_reader = None;
    let mut next_writer = None;
    if !is_last {
        let (r, w) = pipe().expect("pipe failed");
        next_reader = Some(unsafe { std::fs::File::from_raw_fd(r.into_raw_fd()) });
        next_writer = Some(unsafe { std::fs::File::from_raw_fd(w.into_raw_fd()) });
    }
    if is_bi {
        // ... built-in logic ...
        let mut input_buf = Vec::new();
        if let Some(ref mut file) = prev_reader {
            file.read_to_end(&mut input_buf).ok();
        }
        let mut output_buf = Vec::new();
        run_builtin(cmd, args, Some(&mut &input_buf[..]), Some(&mut output_buf));
        if let Some(mut w) = next_writer {
            w.write_all(&output_buf).ok();
            drop(w);
        } else {
            print!("{}", String::from_utf8_lossy(&output_buf));
            io::stdout().flush().ok();
        }
    } else {
        let mut command = Command::new(cmd);
        command.args(args);
        if let Some(ref mut file) = prev_reader {
            command.stdin(Stdio::from(file.try_clone().unwrap()));
        } else if is_first {
            command.stdin(Stdio::inherit());
        }
        if !is_last {
            if let Some(ref w) = next_writer {
                command.stdout(Stdio::from(w.try_clone().unwrap()));
            }
        }
        let child = command.spawn().expect("failed to spawn");
        if let Some(w) = next_writer { drop(w); }
        children.push(child);
    }
    if let Some(old_reader) = prev_reader.take() { drop(old_reader); }
    prev_reader = next_reader;
}
// Close the last read end if it exists
if let Some(last_reader) = prev_reader.take() { drop(last_reader); }
for mut child in children { let _ = child.wait(); }
io::stdout().flush().ok();
io::stderr().flush().ok();
```
I’ve tried dropping/closing all the pipe ends, waiting for all, and flushing, but it’s still not happening. Any ideas what I’m missing? Thanks for any help!
@flurry101
Note that, instead of waiting to receive all the data before displaying it (as is the case with blocking commands like tail -f or head -n), pipes are expected to stream data continuously until the previous commands in the pipeline stop streaming.
I handled this by setting stdout to None for the last command in the pipeline
I even tried pre-calculating the file size to avoid the "$"; it worked locally, but it was a bit slow and gave the same result on the test cases.
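For reference, that idea can be sketched in Python like this (run_pipeline is a hypothetical helper, not code from this thread): every stage is spawned in parallel, and only the last stage keeps stdout=None (inherit), so its output streams straight to the terminal while the earlier stages are connected by pipes:

```python
import subprocess

def run_pipeline(stages):
    """Spawn all pipeline stages in parallel.

    Only the last stage inherits our stdout (stdout=None), so its output
    streams to the terminal as it is produced instead of being buffered.
    """
    procs = []
    prev_out = None
    for i, argv in enumerate(stages):
        last = i == len(stages) - 1
        p = subprocess.Popen(
            argv,
            stdin=prev_out,
            stdout=None if last else subprocess.PIPE,  # None = inherit terminal
        )
        if prev_out is not None:
            prev_out.close()  # drop our read-end copy so EOF/SIGPIPE propagate
        prev_out = p.stdout
        procs.append(p)
    for p in procs:
        p.wait()
    return procs[-1].returncode
```

Closing the parent's copy of each read end matters: if the parent keeps it open, downstream stages never see EOF and the pipeline appears to hang.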