Feedback about the course

I feel a bit conflicted about this course.

I spent most of my time looking at the Python docs (because I rarely use Python) and API docs.
Yes, I understood a bit more about how these agents work and how the local execution environment works internally, but that was relatively trivial stuff.
Yes, it's a fun little exercise, but it's only minimally about AI and AI agents; it's 99% about writing glue code. I was a bit disappointed by that, but I still had fun and learned something.

I could think of two ideas to improve and expand the course:

  • For people who need to understand security implications of LLMs
    • show an example where an LLM might screw things up badly if the environment was real
    • build security measures to prevent the worst things from happening, while making clear that agents are inherently insecure if they can do significant things, especially if they have access to the web
  • For tinkerers and self hosters
    • apply the course to a local LLM / VM / VPS so people can play with their own things if they want to

Yep, these are just the base stages 🙂 We add "extensions" over time to cover more ground in challenges (you can see dozens of these on shell/redis, for example).

That said, this challenge is about writing code. It’s not the most efficient way to learn how to use something like Claude code.

Would a “sandboxing” extension be something that you’re interested in? (Since you mentioned security implications)


Yes, as a third step, absolutely! I'm a bit confused by that question, though, because from a security perspective the next step would be to limit bad behavior right at the execution level, when the LLM requests "rm -rf /" or tries to read the password file, etc. Only after that would I look at sandboxing as an additional measure. Am I coming at this from the wrong angle? Sandboxing is not a panacea either, so anything bad that can happen should be contained as early as possible.
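To make the idea concrete, here is a minimal sketch of such an execution-level guard, the kind of check a Bash tool could run before actually executing a command. The pattern list and function name are my own illustration, not from the course, and a denylist like this is necessarily incomplete, which is exactly why sandboxing is still needed on top:

```python
import re

# Illustrative denylist: reject a command *before* execution if it
# matches a known-dangerous pattern. Patterns here are examples only.
DENY_PATTERNS = [
    re.compile(r"\brm\b.*\s-[a-zA-Z]*r"),  # any recursive rm variant
    re.compile(r"/etc/(passwd|shadow)"),   # reading the password files
    re.compile(r"\bmkfs\b|\bdd\b\s+if="),  # disk-wiping tools
]

def is_allowed(command: str) -> bool:
    """Return False if the command matches a dangerous pattern."""
    return not any(p.search(command) for p in DENY_PATTERNS)
```

A determined model (or attacker) can trivially evade string matching, e.g. by writing a script that does the deletion, so this only contains the most obvious failure modes as early as possible.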

Ah, yep - that's on the list too (Claude Code's solution for this is "Command Permissions").

You can vote on these ideas here btw: The Software Pro's Best Kept Secret.. Haven’t added sandboxing yet but will do if we get more people mentioning it!


Awesome, thanks!

Happy to see someone else was uncomfortable with this phase. What I did for my Bash tool was to allow only non-recursive rm and ls, and only within the project directory.
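That allowlist approach might look something like this sketch (names and the project-root path are hypothetical, not the poster's actual code):

```python
import os
import shlex

PROJECT_DIR = os.path.abspath("project")  # hypothetical project root

def bash_tool_allowed(command: str) -> bool:
    """Allow only `ls` and non-recursive `rm`, restricted to paths
    inside PROJECT_DIR. Anything else is rejected."""
    try:
        parts = shlex.split(command)
    except ValueError:
        return False  # malformed quoting
    if not parts or parts[0] not in ("ls", "rm"):
        return False
    for arg in parts[1:]:
        if arg.startswith("-"):
            # reject recursive flags such as -r, -R, -rf
            if any(ch in arg for ch in "rR"):
                return False
        else:
            # resolve relative to the project dir and ensure it stays inside
            path = os.path.abspath(os.path.join(PROJECT_DIR, arg))
            if path != PROJECT_DIR and not path.startswith(PROJECT_DIR + os.sep):
                return False
    return True
```

Note that `os.path.join` discards `PROJECT_DIR` when the argument is an absolute path, so `rm /etc/passwd` fails the containment check, and `..` traversal is caught because `abspath` resolves it before the prefix comparison.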
