Managing Networks – Trial and Error

I’ve been playing around in my free time with automating connections between different AWS instances as a way to learn more about networking.  So far, it’s been pretty fun.  In the last post I mentioned a series of libraries just for networking.

This post talks more about user-friendly interfaces in the form of CLI libraries, as well as some interesting topics regarding asynchronous processing in Python.  A bit about the CLI libraries that I really like:

Click – This is a really great library: it feels like an almost natural way of building up a simple CLI quickly and efficiently.

You start by creating a group with the @click.group() decorator, which acts as a container for all your commands.  You then write ordinary Python functions and decorate them with @cli.command() to register them (the help text can be supplied there or via the docstring).  Add @click.argument(...) to declare arguments.  The set of argument types available is pretty extensive, including a file type (which checks that the file exists).  The nice thing about this interface is that it generates the help menu for you and, if the commands ever get more complex, provides ways to subdivide them into smaller groups.  This library is great for centralizing a bunch of commands.  Create a setup.py file with an entry point to make the CLI available anywhere on Linux under a custom command name (I use something like dbops as a prefix).
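The group/command/argument pattern above can be sketched as follows (the command and its behavior are illustrative, not from the original tool):

```python
import click


@click.group()
def cli():
    """Illustrative network-management commands."""


@cli.command(help="Print the status of a named connection.")
@click.argument("name")
def status(name):
    # A real command would look the connection up; here we just echo.
    click.echo(f"status of {name}: unknown")
```

Adding `if __name__ == "__main__": cli()` (or a setup.py entry point named, say, dbops) makes this runnable, and `--help` output is generated automatically from the decorators.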

Cmd – This allows you to create a command-line utility by defining a single class with a few methods.  The cmd.Cmd class provides a shell, which takes user input and matches it against a set of commands (if they exist).  Commands are specified with def do_<command name>(self, line):, where line is the input string minus the command name (parse this to get arguments).  To make sure the enter key doesn’t re-execute the previous command, define a method def emptyline() that returns a falsy value such as 0 (a falsy return re-prompts for a new command; a truthy return stops the loop).  I played around with this command prompt as a front end to a network management utility and thought it was pretty effective (running the shell alongside worker processes frees you up to develop other services within the application).  I recommend this if you need to get user input and use it within the context of a program.
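A minimal sketch of that class (the commands here are toy examples, not the original network tool):

```python
import cmd


class NetShell(cmd.Cmd):
    """A toy interactive shell built on cmd.Cmd."""

    prompt = "(net) "

    def do_greet(self, line):
        """greet [NAME] -- say hello."""
        # `line` is everything after the command name; parse it for arguments.
        print(f"hello {line or 'world'}")

    def do_exit(self, line):
        """exit -- leave the shell."""
        return True  # a truthy return value stops cmdloop()

    def emptyline(self):
        # The default repeats the last command on a bare enter key;
        # returning a falsy value just re-prompts instead.
        return 0
```

Calling `NetShell().cmdloop()` starts the read-eval loop; `onecmd()` dispatches a single line, which is handy for testing.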

argparse – argparse (and the older optparse, not covered here) is another option.  It works by declaring a set of rules for handling a specific command’s arguments and then parsing them into a namespace object.  The good part of argparse is that the argument handling is very flexible, and you can add things like flags.  I think overall it’s a bit harder to implement than the two above (but more flexible).  I think this is used mostly within a single file.
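A small sketch of those rules, including a flag (the host/port arguments are invented for illustration):

```python
import argparse

# Declare the rules for one command; parse_args returns a Namespace object.
parser = argparse.ArgumentParser(description="Poll a host (illustrative).")
parser.add_argument("host")                                  # positional
parser.add_argument("-p", "--port", type=int, default=22)    # option with a value
parser.add_argument("-v", "--verbose", action="store_true")  # boolean flag

args = parser.parse_args(["db.example.com", "-v"])
```

Here `args.host` is `"db.example.com"`, `args.verbose` is `True`, and `args.port` falls back to its default of 22; in a script you would call `parse_args()` with no arguments to read `sys.argv`.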

sys/os – The sys and os libraries are well worth getting to know.  They provide a great way to interact with the operating system, from checking on files and directories to doing a stat on a file.  One of the great uses of sys and os is the ability to manipulate stdin, arguments, and stdout.  I’ve used this to write Python scripts that accept piped input.  Another interesting library to look at here is the subprocess module, which allows you to run commands in the background and provides file-like objects for stdin, stdout, and stderr (with subprocess.PIPE allowing you to pipe results between subprocesses).
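The pieces mentioned above can be sketched together like this, assuming a Unix-like system (echo and tr stand in for real commands):

```python
import os
import subprocess

# os: check for a path and stat it.
if os.path.exists("."):
    size = os.stat(".").st_size

# subprocess: run `echo hi | tr a-z A-Z` without a shell by chaining PIPEs.
p1 = subprocess.Popen(["echo", "hi"], stdout=subprocess.PIPE)
p2 = subprocess.Popen(["tr", "a-z", "A-Z"], stdin=p1.stdout,
                      stdout=subprocess.PIPE)
p1.stdout.close()  # let p1 receive SIGPIPE if p2 exits early
out = p2.communicate()[0].decode().strip()
```

A script that accepts piped results would iterate over `sys.stdin` in the same spirit; the file-like stdin/stdout/stderr handles on each Popen are what make the chaining work.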

The parallel processing part of my project was pretty cool.  I worked mostly with the multiprocessing and threading libraries.  Multiprocessing spawns new processes (via fork on Linux), each with its own memory, while threading runs multiple threads that share memory within a single process.

multiprocessing – I really like this library.  You can create a set of workers and provide them a function to do work in parallel.  The join method (similar to wait in bash) blocks until a worker is done before the parent continues.  The overall API is pretty easy to pick up: you create a multiprocessing.Process, provide a target function and a set of arguments for the function (typically as a tuple or list).  You then just call the start method on the process and it begins running in the background.  Other cool things about multiprocessing are queues, pipes (bi-directional communication), and a proxy shared-memory Manager for dictionaries and lists (I didn’t get it to work, but see the docs).  One thing I did run into was working around shared-memory issues (my initial fault was not researching threading vs. multiprocessing).
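The start/join pattern with a queue for results looks roughly like this (the "polling" is a stand-in; region names are invented):

```python
import multiprocessing


def poll(region, results):
    # Stand-in for polling AWS for connection info in a worker process.
    results.put((region, "ok"))


def run_workers(regions):
    results = multiprocessing.Queue()
    workers = [multiprocessing.Process(target=poll, args=(r, results))
               for r in regions]
    for w in workers:
        w.start()  # each worker runs in the background
    for w in workers:
        w.join()   # block until every worker has finished
    return {results.get() for _ in workers}


if __name__ == "__main__":
    print(run_workers(["us-east-1", "eu-west-1"]))
```

The Queue handles the cross-process communication that shared variables cannot, since each forked process gets its own copy of memory.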

threading – The commands are similar to multiprocessing (in terms of setup), but run things in a thread instead of a process.  You’ll see threading used a lot in libraries; the TCPServer from the previous post (the SocketServer library) uses it in its ThreadingMixIn.
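The setup mirrors multiprocessing, but because threads share the process’s memory, concurrent updates need a lock; a small sketch:

```python
import threading

counter = 0
lock = threading.Lock()


def work(n):
    # Threads share memory, so updates to `counter` must be guarded.
    global counter
    for _ in range(n):
        with lock:
            counter += 1


threads = [threading.Thread(target=work, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

After the joins, `counter` is 4000; without the lock, interleaved read-modify-write cycles could lose increments.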

Celery – I didn’t get into Celery as much as I’d like to, mostly due to not wanting to set up RabbitMQ or Redis for a tiny application (I used sqlite to keep the footprint small and setup easy).  It’s still a great tool to look into, as it runs a queue (or set of queues) for you and allows you to execute things asynchronously (it’s useful for messaging too).  I will probably look more into this library and the associated products in the future.

The application I developed was a tool for managing database connections.  It was split into three parts: a process that polls AWS for connection information, a database for storing that information (sqlite), and a process that managed SQL connections for me (through port forwarding).  This was all controlled via a CLI based on the Cmd library.  Messages were sent to the polling and SQL-connection-manager processes via multiprocessing queues, with each part running in a separate process.  Within the SQL connection manager, I created a TCPServer (SocketServer), which I ran in a different thread and wrapped in a class to manage connections.  The threading was done partly to isolate failures caused by a remote computer shutting down or refusing a connection; this prevents the entire application from failing due to the actions of a single TCPServer.  Overall, I’ve liked the experiment so far, but I don’t intend to do much more with it.  It was an experiment to test out a lot of these libraries and get a deeper understanding of things like ssh.
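The CLI-to-worker messaging described above can be sketched with a queue and a worker loop (the message strings and the sql_manager name are hypothetical, not the original code):

```python
import multiprocessing


def sql_manager(inbox):
    # Worker loop: consume messages from the CLI until told to stop.
    while True:
        msg = inbox.get()
        if msg == "stop":
            break
        # ...a real manager would open a port-forwarded SQL connection here...


def run():
    inbox = multiprocessing.Queue()
    worker = multiprocessing.Process(target=sql_manager, args=(inbox,))
    worker.start()
    inbox.put("connect db1")  # what a Cmd do_connect handler might send
    inbox.put("stop")
    worker.join()
    return worker.exitcode


if __name__ == "__main__":
    run()
```

In the real application a Cmd front end would do the `put` calls, and the polling process would sit on a second queue of its own; the isolation means a crash inside one worker leaves the shell and the other worker running.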
