
Ansible Part 4: Putting it All Together

Roles are the most complicated and yet simplest aspect of Ansible to learn.

I’ve mentioned before that Ansible’s ad-hoc mode often is dismissed as just a way to learn how to use Ansible. I couldn’t disagree with that mentality more fervently. Ad-hoc mode is actually what I tend to use most often on a day-to-day basis. That said, playbooks and roles are very powerful ways to utilize Ansible’s abilities. In fact, when most people think of Ansible, they tend to think of the roles feature, because it’s the way most Ansible code is shared. So first, it’s important to understand the relationship between ad-hoc mode, playbooks and roles.

Ad-hoc Mode

This is a bit of a review, but it’s easy to forget once you start creating playbooks. Ad-hoc mode is simply a one-liner that uses an Ansible module to accomplish a given task on a set of computers. Something like:


ansible cadlab -b -m yum -a "name=vim state=latest"

will install vim on every computer in the cadlab group. The -b signals to elevate privilege (“become” root), the -m means to use the yum module, and the -a supplies the arguments for that module. In this case, it’s installing the latest version of vim.

Usually when I use ad-hoc mode to install packages, I’ll follow up with something like this:


ansible cadlab -b -m service -a "name=httpd state=started enabled=yes"

That one-liner will make sure that the httpd service is running and set to start on boot automatically (the latter is what “enabled” means). Like I said at the beginning, I most often use Ansible’s ad-hoc mode on a day-to-day basis. When a new rollout or upgrade needs to happen though, that’s when it makes sense to create a playbook, which is a text file that contains a bunch of Ansible commands.

Playbook Mode

I described playbooks in my last article. They are text files, formatted in YAML (which stands for “YAML Ain’t Markup Language”), that contain a list of things for Ansible to accomplish. For example, to install Apache on a lab full of computers, you’d create a file something like this:


---

- hosts: cadlab
  tasks:
  - name: install apache2 on CentOS
    yum: name=httpd state=latest
    notify: start httpd
    ignore_errors: yes

  - name: install apache2 on Ubuntu
    apt: update_cache=yes name=apache2 state=latest
    notify: start apache2
    ignore_errors: yes

  handlers:
  - name: start httpd
    service: name=httpd enabled=yes state=started

  - name: start apache2
    service: name=apache2 enabled=yes state=started

Mind you, this isn’t the most elegant playbook. It contains a single play that tries to install httpd with yum and apache2 with apt. If the lab is a mix of CentOS and Ubuntu machines, one or the other of the installation methods will fail. That’s why the ignore_errors command is in each task. Otherwise, Ansible would quit when it encountered an error. Again, this method works, but it’s not pretty. It would be much better to create conditional statements that would allow for a graceful exit on incompatible platforms. In fact, playbooks that are more complex and do more things tend to evolve into a “role” in Ansible.
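
For example, here’s a sketch of the same play rewritten with the ansible_os_family fact (covered in the Playbooks article below) guarding each task instead of ignore_errors:


---

- hosts: cadlab
  tasks:
  - name: install apache2 on CentOS
    yum: name=httpd state=latest
    notify: start httpd
    when: ansible_os_family == "RedHat"

  - name: install apache2 on Ubuntu
    apt: update_cache=yes name=apache2 state=latest
    notify: start apache2
    when: ansible_os_family == "Debian"

The handlers section stays exactly the same as before; each task now simply runs only on the distribution it applies to.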

Roles

Roles aren’t really a mode of operation. Actually, roles are an integral part of playbooks. Just as a playbook can have tasks, variables and handlers, it can also have roles. Quite simply, roles are just a way to organize the various components referenced in playbooks. It starts with a folder layout:


roles/
  webserver/
    tasks/
      main.yml
    handlers/
      main.yml
    vars/
      main.yml
    templates/
      index.html.j2
      httpd.conf.j2
    files/
      ntp.conf

Ansible looks for a roles folder in the current directory, but also in a system-wide location like /etc/ansible/roles, so you can store your roles to keep them organized and out of your home folder. The advantage of using roles is that your playbooks can look as simple as this:


---

- hosts: cadlab
  roles:
    - webserver

And then the “webserver” role will be applied to the group “cadlab” without needing to type any more information inside your playbook. When a role is specified, Ansible looks for a folder matching the name “webserver” inside your roles folder (in the current directory or the system-wide directory). It then will execute the tasks inside webserver/tasks/main.yml. Any handlers mentioned in those tasks will be searched for automatically in webserver/handlers/main.yml. Also, any time files are referenced by a template module or file/copy module, the path doesn’t need to be specified. Ansible automatically will look inside webserver/files/ or webserver/templates/ for the files.

Basically, using roles will save you lots of path declarations and include statements. That might seem like a simple thing, but the organization creates a standard that not only makes it easy to figure out what a role does, but also makes it easy to share your code with others. If you always know any files must be stored in roles/rolename/files/, it means you can share a “role” with others and they’ll know exactly what to do with it—namely, just plop it in their own roles folder and start using it.

Sharing Roles: Ansible Galaxy

One of the best aspects of current DevOps tools like Chef, Puppet and Ansible is that there is a community of people willing to share their hard work. On a small scale, roles are a great way to share with your coworkers, especially if you have roles that are customized specifically for your environment. Since many environments are similar, roles can be shared with an even wider audience—and that’s where Ansible Galaxy comes into play.

I’ll be honest, part of the draw for me with Ansible is the sci-fi theme in the naming convention. I know I’m a bit silly in that regard, but just naming something Ansible or Ansible Galaxy gets my attention. This might be one of those “built by nerds, for nerds” sort of things. I’m completely okay with that. If you head over to the Galaxy site, you’ll find the online repository for shared roles—and there are a ton.

For simply downloading and using other people’s roles, you don’t need any sort of account on Ansible Galaxy. You can search on the website by going to Galaxy and clicking “Browse Roles” on the left side of the page (Figure 1). There are more than 13,000 roles currently uploaded to Ansible Galaxy, so I highly recommend taking advantage of the search feature! In Figure 2, you’ll see I’ve searched for “apache” and sorted by “downloads” in order to find the most popular roles.

Figure 1. Click that link to browse and search for roles.

Figure 2. Jeff Geerling’s roles are always worth checking out.

Many of the standard roles you’ll find that are very popular are written by Jeff Geerling, whose user name is geerlingguy. He’s an Ansible developer who has written at least one Ansible book that I’ve read and possibly others. He shares his roles, and I encourage you to check them out—not only for using them, but also for seeing how he codes around issues like conditionally choosing the correct module for a given distribution and things like that. You can click on the role name and see all the code involved. You might notice that if you want to examine the code, you need to click on the GitHub link. That’s one of the genius moves of Ansible Galaxy—all roles are stored on a user’s GitHub page as opposed to an Ansible Galaxy server. Since most developers keep their code on GitHub, they don’t need to remember to upload to Ansible Galaxy as well.

Incidentally, if you ever desire to share your own Ansible roles, you’ll need to use a GitHub user name to upload them, because again, roles are all stored on GitHub. But that’s getting ahead of things; first you need to learn how to use roles in your environment.

Using ansible-galaxy to Install Roles

It’s certainly possible to download an entire repository and then unzip the contents into your roles folder. Since they’re just text files and structured folders, there’s not really anything wrong with doing it that way. It’s just far less convenient than using the tools built in to Ansible.

There is a search mechanism on the Ansible command line for searching the Ansible Galaxy site, but in order to find a role I want to use, I generally go to the website and find it, then use the command-line tools to download and install it. Here’s an example of Jeff Geerling’s “apache” role. In order to use Ansible to download a role, you need to do this:


sudo ansible-galaxy install geerlingguy.apache

Notice two things. First, you need to execute this command with root privilege. That’s because the ansible-galaxy command will install roles in your system-wide roles folder, which isn’t writable (by default) by your regular user account. Second, take note of the format of roles named on Ansible Galaxy. The format is username.rolename, so in this case, geerlingguy.apache, which is also how you reference the role inside your playbooks.
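
Incidentally, if you’d rather keep roles out of the system-wide folder (and skip the sudo), ansible-galaxy can install into a local directory instead via the -p option. A quick sketch, assuming a roles/ folder in your project directory:


ansible-galaxy install -p ./roles geerlingguy.apache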

If you want to see roles listed with the correct format, you can use ansible-galaxy’s search command, but like I said, I find it less than useful because it doesn’t sort by popularity. In fact, I can’t figure out what it sorts by at all. The only time I use the command-line search feature is if I also use grep to narrow down roles by a single person. Anyway, Figure 3 shows what the results of ansible-galaxy search look like. Notice the username.rolename format.

Figure 3. I love the command line, but these search results are frustrating.
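
One trick that does help: the search command accepts filters, so instead of piping through grep, you can narrow the results to a single author directly (a sketch, assuming your version supports the --author option):


ansible-galaxy search apache --author geerlingguy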

Once you install a role, it is immediately available for you to use in your own playbooks, because it’s installed in the system-wide roles folder. In my case, that’s /etc/ansible/roles (Figure 4). So now, if I create a playbook like this:


---
- hosts: cadlab
  roles:
    - geerlingguy.apache

Apache will be installed on all my cadlab computers, regardless of what distribution they’re using. If you want to see how the role (which is just a bunch of tasks, handlers and so forth) works, just pick through the folder structure inside /etc/ansible/roles/geerlingguy.apache/. It’s all right there for you to use or modify.

Figure 4. Easy Peasy, Lemon Squeezy

Creating Your Own Roles

There’s really no magic here, since you easily can create a roles folder and then create your own roles manually inside it, but ansible-galaxy does give you a shortcut by creating a skeleton role for you. Make sure you have a roles folder, then just type:


ansible-galaxy init roles/rolename

and you’ll end up with a nicely created folder structure for your new role. It doesn’t do anything magical, but as someone who has misspelled “Templates” before, I can tell you it will save you a lot of frustration if you have clumsy fingers like me.
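
For reference, the skeleton it generates looks roughly like this (the exact folders vary a bit between Ansible versions), with a stub main.yml inside most of the subfolders:


roles/rolename/
  defaults/
  files/
  handlers/
  meta/
  tasks/
  templates/
  tests/
  vars/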

Sharing Your Roles

If you get to the point where you want to share your roles on Ansible Galaxy, it’s fairly easy to do. Make sure you have your role on GitHub (using git is beyond the scope of this article, but using git and GitHub is a great way to keep track of your code anyway). Once you have your roles on GitHub, you can use ansible-galaxy to “import” them into the publicly searchable Ansible Galaxy site. You first need to authenticate:


ansible-galaxy login

Before you try to log in with the command-line tool, be sure you’ve visited the Ansible Galaxy website and logged in with your GitHub account. You can see in Figure 5 that at first I was unable to log in. Then I logged in on the website, and after that, I was able to log in with the command-line tool successfully.

Figure 5. It drove me nuts trying to figure out why I couldn’t authenticate.

Once you’re logged in, you can add your role by typing:


ansible-galaxy import githubusername githubreponame

The process takes a while, so you can add the --no-wait option if you want, and the role will be imported in the background. I really don’t recommend doing this until you have created roles worth sharing. Keep in mind, there are more than 13,000 roles on Ansible Galaxy, so there are many “re-inventions of the wheel” happening.

From Here?

Well, it’s taken me four articles, but I think if you’ve been following along, you should be to the point where you can take it from here. Playbooks and roles are usually where people focus their attention in Ansible, but I also encourage you to take advantage of ad-hoc mode for day-to-day maintenance tasks. Ansible in some ways is just another DevOps configuration management tool, but for me, it feels the most like the traditional problem solving I used Bash scripts to accomplish for decades. Perhaps I just like Ansible because it thinks the same way I do. Regardless of your motivation, I encourage you to learn Ansible enough so you can determine whether it fits into your workflow as well as it fits into mine.

If you’d like more direct training on Ansible (and other stuff) from yours truly, visit me at my DayJob as a trainer for CBT Nuggets. You can get a full week free if you head over to https://cbt.gg/shawnp0wers and sign up for a trial!

The 4 Part Series on Ansible includes:
Part 1 – DevOps for the Non-Dev
Part 2 – Making Things Happen
Part 3 – Playbooks
Part 4 – Putting it All Together

Ansible Part 3: Playbooks

Playbooks make Ansible even more powerful than before.

To be quite honest, if Ansible had nothing but its ad-hoc mode, it still would be a powerful and useful tool for automating large numbers of computers. In fact, if it weren’t for a few features, I might consider sticking with ad-hoc mode and adding a bunch of those ad-hoc commands to a Bash script and be done with learning. Those few additional features, however, make the continued effort well worth it.

Tame the Beast with YAML

Ansible goes out of its way to use an easy-to-read configuration file for making “playbooks”, which are files full of separate Ansible “tasks”. A task is basically an ad-hoc command written out in a configuration file, which makes it more organized and easy to expand. The configuration files use YAML (which stands for “YAML Ain’t Markup Language”). It’s an easy-to-read format, but it does rely on whitespace, which isn’t terribly common with most config files. A simple playbook looks something like this:


---

- hosts: webservers
  become: yes
  tasks:
    - name: this installs a package
      apt: name=apache2 update_cache=yes state=latest

    - name: this restarts the apache service
      service: name=apache2 enabled=yes state=restarted

The contents should be fairly easy to identify. It’s basically two ad-hoc commands broken up into a YAML configuration file. There are a few important things to notice. First, playbook filenames end with .yaml (or .yml), and by convention, every YAML file begins with three hyphen characters. Also, as mentioned above, whitespace matters. Finally, knowing when a hyphen should precede a section and when it should just be spaced appropriately often is confusing. Basically, every new list item needs to start with a hyphen, but it’s often hard to tell what should be its own item. Nevertheless, it starts to feel natural as you create more and more playbooks.

The above playbook would be executed by typing:


ansible-playbook filename.yaml

And that is the equivalent of these two commands:


ansible webservers -b -m apt -a "name=apache2 update_cache=yes state=latest"
ansible webservers -b -m service -a "name=apache2 enabled=yes state=restarted"

Handling Your Handlers

But a bit of organization is really only the beginning of why playbooks are so powerful. First off, there’s the idea of “Handlers”, which are tasks that are executed only when “notified” that a task has made a change. How does that work exactly? Let’s rewrite the above YAML file to make the second task a handler:


---

- hosts: webservers
  become: yes
  tasks:
    - name: this installs a package
      apt: name=apache2 update_cache=yes state=latest
      notify: enable apache

  handlers:
    - name: enable apache
      service: name=apache2 enabled=yes state=started

On the surface, this looks very similar to just executing multiple tasks. When the first task (installing Apache) executes, if a change is made, it notifies the “enable apache” handler, which makes sure Apache is enabled on boot and currently running. The significance is that if Apache is already installed, and no changes are made, the handler never is called. That makes the code much more efficient, but it also means no unnecessary interruption of the already running Apache process.

There are other subtle time-saving issues with handlers too—for example, multiple tasks can call a handler, but it executes only a single time regardless of how many times it’s called. But the really significant thing to remember is that handlers are executed (notified) only when an Ansible task makes a change on the remote system.
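
Here’s a minimal sketch of that behavior (the config filename is just an example): both tasks notify the same handler, but Apache restarts at most once per run, and only if one of the tasks actually changed something:


---

- hosts: webservers
  become: yes
  tasks:
    - name: this installs a package
      apt: name=apache2 update_cache=yes state=latest
      notify: restart apache

    - name: this installs a config file
      copy: src=./apache2.conf dest=/etc/apache2/apache2.conf
      notify: restart apache

  handlers:
    - name: restart apache
      service: name=apache2 state=restarted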

Just the Facts, Ma’am

Variable substitution works quite simply inside a playbook. Here’s a simple example:


---

- hosts: webservers
  become: yes
  vars:
    package_name: apache2
  tasks:
    - name: this installs a package
      apt: "name={{ package_name }} update_cache=yes state=latest"
      notify: enable apache

  handlers:
    - name: enable apache
      service: "name={{ package_name }} enabled=yes state=started"

It should be fairly easy to understand what’s happening above. Note that I did put the entire module action section in quotes. It’s not always required, but when a value starts with a variable substitution, YAML would otherwise read the braces as the start of a dictionary, so the quotes are needed there. I always try to put things in quotes when variables are involved.

The really interesting thing about variables, however, are the “Gathered Facts” about every host. You might notice when executing a playbook that the first thing Ansible does is “Gathering Facts…”, which completes without error, but doesn’t actually seem to do anything. What’s really happening is that system information is getting populated into variables that can be used inside a playbook. To see the entire list of “Gathered Facts”, you can execute an ad-hoc command:


ansible webservers -m setup

You’ll get a huge list of facts generated from the individual hosts. Some of them are particularly useful. For example, ansible_os_family will return something like “RedHat” or “Debian” depending on which distribution you’re using. Ubuntu and Debian systems both return “Debian”, while Red Hat and CentOS will return “RedHat”. Although that’s certainly interesting information, it’s really useful when different distros use different tools—for example, apt vs. yum.

Getting Verbose

One of the frustrations of moving from Ansible ad-hoc commands to playbooks is that in playbook mode, Ansible tends to keep fairly quiet with regard to output. With ad-hoc mode, you often can see what is going on, but with a playbook, you know only if it finished okay, and if a change was made. There are two easy ways to change that. The first is just to add the -v flag when executing ansible-playbook. That adds verbosity and provides lots of feedback when things are executed. Unfortunately, it often gives so much information, that usefulness gets lost in the mix. Still, in a pinch, just adding the -v flag helps.

If you’re creating a playbook and want to be notified of things along the way, the debug module is really your friend. In ad-hoc mode, the debug module doesn’t make much sense to use, but in a playbook, it can act as a “reporting” tool about what is going on. For example:


---

- hosts: webservers
  tasks:
   - name: describe hosts
     debug: msg="Computer {{ ansible_hostname }} is running {{ ansible_os_family }} or equivalent"

The above will show you something like Figure 1, which is incredibly useful when you’re trying to figure out the sort of systems you’re using. The nice thing about the debug module is that it can display anything you want, so if a value changes, you can have it displayed on the screen so you can troubleshoot a playbook that isn’t working like you expect it to work. It is important to note that the debug module doesn’t do anything other than display information on the screen for you. It’s not a logging system; rather, it’s just a way to have information (customized information, unlike the verbose flag) displayed during execution. Still, it can be invaluable as your playbooks become more complex.

Figure 1. Debug mode is the best way to get some information on what’s happening inside your playbooks.

If This Then That

Conditionals are a part of pretty much every programming language. Ansible YAML files also can take advantage of conditional execution, but the format is a little wacky. Normally the condition comes first, and then if it evaluates as true, the following code executes. With Ansible, it’s a little backward. The task is completely spelled out, then a when statement is added at the end. It makes the code very readable, but as someone who’s had an if/then mentality his entire career, it feels funny. Here’s a slightly more complicated playbook. See if you can parse out what would happen in an environment with both Debian/Ubuntu and Red Hat/CentOS systems:


---

- hosts: webservers
  become: yes
  tasks:
    - name: install apache this way
      apt: name=apache2 update_cache=yes state=latest
      notify: start apache2
      when: ansible_os_family == "Debian"

    - name: install apache that way
      yum: name=httpd state=latest
      notify: start httpd
      when: ansible_os_family == "RedHat"

  handlers:
    - name: start apache2
      service: name=apache2 enabled=yes state=started

    - name: start httpd
      service: name=httpd enabled=yes state=started

Hopefully the YAML format makes that fairly easy to read. Basically, it’s a playbook that will install Apache on hosts using either yum or apt based on which type of distro they have installed. Then handlers make sure the newly installed packages are enabled and running.

It’s easy to see how useful a combination of gathered facts and conditional statements can be. Thankfully, Ansible doesn’t stop there. As with other configuration management systems, it includes most features of programming and scripting languages. For example, there are loops.

Play It Again, Sam

If there is one thing Ansible does well, it’s loops. Quite frankly, it supports so many different sorts of loops, I can’t cover them all here. The best way to figure out the perfect sort of loop for your situation is to read the Ansible documentation directly.

For simple lists, playbooks use a familiar, easy-to-read method for doing multiple tasks. For example:


---

- hosts: webservers
  become: yes

  tasks:
    - name: install a bunch of stuff
      apt: "name={{ item }} state=latest update_cache=yes"
      with_items:
        - apache2
        - vim
        - chromium-browser

This simple playbook will install multiple packages using the apt module. Note the special variable named item, which is replaced with the items one at a time in the with_items section. Again, this is pretty easy to understand and utilize in your own playbooks. Other loops work in similar ways, but they’re formatted differently. Just check out the documentation for the wide variety of ways Ansible can repeat similar tasks.
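
One note for future-proofing: newer Ansible releases prefer the loop keyword over with_items for simple lists. Assuming a version recent enough to support it, the task above can be written like this and behaves the same way:


    - name: install a bunch of stuff
      apt: "name={{ item }} state=latest update_cache=yes"
      loop:
        - apache2
        - vim
        - chromium-browser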

Templates

One last module I find myself using often is the template module. If you’ve ever used mail merge in a word processor, templating works similarly. Basically, you create a text file and then use variable substitution to create a custom version on the fly. I most often do this for creating HTML files or config files. Ansible uses the Jinja2 templating language, which is conveniently similar to standard variable substitution in playbooks themselves. The example I almost always use is a custom HTML file that can be installed on a remote batch of web servers. Let’s look at a fairly complex playbook and an accompanying HTML template file.

Here’s the playbook:


---

- hosts: webservers
  become: yes

  tasks:
   - name: install apache2
     apt: name=apache2 state=latest update_cache=yes
     when: ansible_os_family == "Debian"

   - name: install httpd
     yum: name=httpd state=latest
     when: ansible_os_family == "RedHat"

   - name: start apache2
     service: name=apache2 state=started enabled=yes
     when: ansible_os_family == "Debian"

   - name: start httpd
     service: name=httpd state=started enabled=yes
     when: ansible_os_family == "RedHat"

   - name: install index
     template:
       src: index.html.j2
       dest: /var/www/html/index.html

Here’s the template file, which must end in .j2 (it’s the file referenced in the last task above):


<html><center>
<h1>This computer is running {{ ansible_os_family }},
and its hostname is:</h1>
<h3>{{ ansible_hostname }}</h3>
{# this is a comment, which won't be copied to the index.html file #}
</center></html>

This also should be fairly easy to understand. The playbook takes a few different things it learned and installs Apache on the remote systems, regardless of whether they are Red Hat- or Debian-based. Then, it starts the web servers and makes sure the web server starts on system boot. Finally, the playbook takes the template file, index.html.j2, and substitutes the variables while copying the file to the remote system. Note the {# #} format for making comments. Those comments are completely erased on the remote system and are visible only in the .j2 file on the Ansible machine.

The Sky Is the Limit!

I’ll finish up this series in my next article, where I plan to cover how to build on your playbook knowledge to create entire roles and take advantage of the community contributions available. Ansible is a very powerful tool that is surprisingly simple to understand and use. If you’ve been experimenting with ad-hoc commands, I encourage you to create playbooks that will allow you to do multiple tasks on a multitude of computers with minimal effort. At the very least, play around with the “Facts” gathered by the ansible-playbook app, because those are things you can’t reference from the ad-hoc mode of Ansible. Until next time, learn, experiment, play and have fun!

If you’d like more direct training on Ansible (and other stuff) from yours truly, visit me at my DayJob as a trainer for CBT Nuggets. You can get a full week free if you head over to https://cbt.gg/shawnp0wers and sign up for a trial!

The 4 Part Series on Ansible includes:
Part 1 – DevOps for the Non-Dev
Part 2 – Making Things Happen
Part 3 – Playbooks
Part 4 – Putting it All Together

Ansible Part 2: Making Things Happen

Finally, an automation framework that thinks like a sysadmin. Ansible, you’re hired.

In my last article, I described how to configure your server and clients so you could connect to each client from the server. Ansible is a push-based automation tool, so the connection is initiated from your “server”, which is usually just a workstation or a server you ssh in to from your workstation. In this article, I explain how modules work and how you can use Ansible in ad-hoc mode from the command line.

Ansible is supposed to make your job easier, so the first thing you need to learn is how to do familiar tasks. For most sysadmins, that means some simple command-line work. Ansible has a few quirks when it comes to command-line utilities, but it’s worth learning the nuances, because it makes for a powerful system.

Command Module

This is the safest module to execute remote commands on the client machine. As with most Ansible modules, it requires Python to be installed on the client, but that’s it. When Ansible executes commands using the Command Module, it does not process those commands through the user’s shell. This means some variables like $HOME are not available. It also means stream functions (redirects, pipes) don’t work. If you don’t need to redirect output or to reference the user’s home directory as a shell variable, the Command Module is what you want to use. To invoke the Command Module in ad-hoc mode, do something like this:


ansible host_or_groupname -m command -a "whoami"

Your output should show SUCCESS for each host referenced and then return the user name of the account used to log in. You’ll notice that the user is not root, unless that’s the user you used to connect to the client computer.

If you want to see the elevated user, you’ll add another argument to the ansible command. You can add -b in order to “become” the elevated user (or the sudo user). So, if you were to run the same command as above with a “-b” flag:


ansible host_or_groupname -b -m command -a "whoami"

you should see a similar result, but the whoami results should say root instead of the user you used to connect. That flag is important to use, especially if you try to run remote commands that require root access!

Shell Module

There’s nothing wrong with using the Shell Module to execute remote commands. It’s just important to know that since it uses the remote user’s environment, if there’s something goofy with the user’s account, it might cause problems that the Command Module avoids. If you use the Shell Module, however, you’re able to use redirects and pipes. You can use the whoami example to see the difference. This command:


ansible host_or_groupname -m command -a "whoami > myname.txt"

should result in an error about > not being a valid argument. Since the Command Module doesn’t run inside any shell, it interprets the greater-than character as something you’re trying to pass to the whoami command. If you use the Shell Module, however, you have no problems:


ansible host_or_groupname -m shell -a "whoami > myname.txt"

This should execute and give you a SUCCESS message for each host, but there should be nothing returned as output. On the remote machine, however, there should be a file called myname.txt in the user’s home directory that contains the name of the user. My personal policy is to use the Command Module whenever possible and to use the Shell Module if needed.

The Raw Module

Functionally, the Raw Module works like the Shell Module. The key difference is that Ansible doesn’t do any error checking, and STDERR, STDOUT and the return code are passed back as-is. Other than that, Ansible has no idea what happens, because it just executes the command over SSH directly. So while the Shell Module will use /bin/sh by default, the Raw Module just uses whatever the user’s personal default shell might be.

Why would a person decide to use the Raw Module? It doesn’t require Python on the remote computer—at all. Although it’s true that most servers have Python installed by default, or easily could have it installed, many embedded devices don’t and can’t have Python installed. For most configuration management tools, not having an agent program installed means the remote device can’t be managed. With Ansible, if all you have is SSH, you still can execute remote commands using the Raw Module. I’ve used the Raw Module to manage Bitcoin miners that have a very minimal embedded environment. It’s a powerful tool, and when you need it, it’s invaluable!
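
Invoking the Raw Module looks just like the other modules. For example, here’s a quick check across hosts that may not have Python at all:


ansible host_or_groupname -m raw -a "uptime"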

Copy Module

Although it’s certainly possible to do file and folder manipulation with the Command and Shell Modules, Ansible includes a module specifically for copying files to the remote machines. Even though it requires learning a new syntax for copying files, I like to use it because Ansible will check to see whether a file exists, and whether it’s the same file. That means it copies the file only if it needs to, saving time and bandwidth. It even will make backups of existing files! I can’t tell you how many times I’ve used scp and sshpass in a Bash FOR loop and dumped files on servers, even if they didn’t need them. Ansible makes it easy and doesn’t require FOR loops and IP iterations.

The syntax is a little more complicated than with Command, Shell or Raw. Thankfully, as with most things in the Ansible world, it’s easy to understand—for example:


ansible host_or_groupname -b -m copy \
    -a "src=./updated.conf dest=/etc/ntp.conf \
        owner=root group=root mode=0644 backup=yes"

This will look in the current directory (on the Ansible server/workstation) for a file called updated.conf and then copy it to each host. On the remote system, the file will be put in /etc/ntp.conf, and if a file already exists, and it’s different, the original will be backed up with a date extension. If the files are the same, Ansible won’t make any changes.

I tend to use the Copy Module when updating configuration files. It would be perfect for updating configuration files on Bitcoin miners, but unfortunately, the Copy Module does require that the remote machine has Python installed. Nevertheless, it’s a great way to update common files on many remote machines with one simple command. It’s also important to note that the Copy Module supports copying remote files to other locations on the remote filesystem using the remote_src=true directive.
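
As a sketch of that last point (the destination path here is just an example), copying a file from one place to another on the remote machine looks like this:


ansible host_or_groupname -b -m copy \
    -a "src=/etc/ntp.conf dest=/root/ntp.conf.bak remote_src=true"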

File Module

The File Module has a lot in common with the Copy Module, but if you try to use the File Module to copy a file, it doesn’t work as expected. The File Module does all its actions on the remote machine, so src and dest both refer to the remote filesystem. The File Module often is used for creating directories, creating links or deleting remote files and folders. The following will simply create a folder named /etc/newfolder on the remote servers and set the mode:


ansible host_or_groupname -b -m file \
       -a "path=/etc/newfolder state=directory mode=0755"

You can, of course, set the owner and group, along with a bunch of other options, which you can learn about on the Ansible doc site. I find I most often will either create a folder or symbolically link a file using the File Module. To create a symlink:


ansible host_or_groupname -b -m file \
         -a "src=/etc/ntp.conf dest=/home/user/ntp.conf \
             owner=user group=user state=link"

Notice that the state directive is how you inform Ansible what you actually want to do. There are several state options:

  • link — create symlink.
  • directory — create directory.
  • hard — create hardlink.
  • touch — create empty file.
  • absent — delete file or directory recursively.
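
For instance, deleting a stale file across every host is a one-liner (the path here is just an example):


ansible host_or_groupname -b -m file \
       -a "path=/tmp/stale.lock state=absent"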

This might seem a bit complicated, especially when you easily could do the same with a Command or Shell Module command, but the clarity of using the appropriate module makes it more difficult to make mistakes. Plus, learning these commands in ad-hoc mode will make playbooks, which consist of many commands, easier to understand (I plan to cover this in my next article).

Package Management

Anyone who manages multiple distributions knows it can be tricky to handle the various package managers. Ansible handles this in a couple ways. There are specific modules for apt and yum, but there’s also a generic module called “package” that will install packages on the remote computer regardless of whether it’s Red Hat- or Debian/Ubuntu-based.
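
When a package happens to share the same name across distributions, the generic module makes for a tidy sketch of that idea (ntp is one such package):


ansible host_or_groupname -b -m package \
          -a "name=ntp state=present"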

Unfortunately, while Ansible usually can detect the type of package manager it needs to use, it doesn’t have a way to fix packages with different names. One prime example is Apache. On Red Hat-based systems, the package is “httpd”, but on Debian/Ubuntu systems, it’s “apache2”. That means some more complex things need to happen in order to install the correct package automatically. The individual modules, however, are very easy to use. I find myself just using apt or yum as appropriate, just like when I manually manage servers. Here’s an apt example:


ansible host_or_groupname -b -m apt \
          -a "update_cache=yes name=apache2 state=latest"

With this one simple line, all the host machines will run apt-get update (that’s the update_cache directive at work), then install apache2’s latest version including any dependencies required. Much like the File Module, the state directive has a few options:

  • latest — get the latest version, upgrading existing if needed.
  • absent — remove package if installed.
  • present — make sure package is installed, but don’t upgrade existing.

The Yum Module works similarly to the Apt Module, but I generally don’t bother with the update_cache directive, because yum refreshes its cache automatically. Although very similar, installing Apache on a Red Hat-based system looks like this:


ansible host_or_groupname -b -m yum \
      -a "name=httpd state=present"

The difference with this example is that if Apache is already installed, it won’t update, even if an update is available. Sometimes updating to the latest version isn’t what you want, so this stops that from accidentally happening.

Just the Facts, Ma’am

One frustrating thing about using Ansible in ad-hoc mode is that you don’t have access to the “facts” about the remote systems. In my next article, where I plan to explore creating playbooks full of various tasks, you’ll see how you can reference the facts Ansible learns about the systems. It makes Ansible far more powerful, but again, it can be utilized only in playbook mode. Nevertheless, it’s possible to use ad-hoc mode to peek at the sorts of information Ansible gathers. If you run the setup module, it will show you all the details from a remote system:


ansible host_or_groupname -b -m setup

That command will spew a ton of variables on your screen. You can scroll through them all to see the vast amount of information Ansible pulls from the host machines. In fact, it shows so much information, it can be overwhelming. You can filter the results:


ansible host_or_groupname -b -m setup -a "filter=*family*"

That should just return a single variable, ansible_os_family, which likely will be “Debian” or “RedHat”. When you start building more complex Ansible setups with playbooks, it’s possible to insert some logic and conditionals in order to use yum where appropriate and apt where the system is Debian-based. Really, the facts variables are incredibly useful and make building playbooks that much more exciting.

But, that’s for another article, because you’ve come to the end of the second installment. Your assignment for now is to get comfortable using Ansible in ad-hoc mode, doing one thing at a time. Most people think ad-hoc mode is just a stepping stone to more complex Ansible setups, but I disagree. The ability to configure hundreds of servers consistently and reliably with a single command is nothing to scoff at. I love making elaborate playbooks, but just as often, I’ll use an ad-hoc command in a situation that used to require me to ssh in to a bunch of servers to do simple tasks. Have fun with Ansible; it just gets more interesting from here!

If you’d like more direct training on Ansible (and other stuff) from yours truly, visit me at my DayJob as a trainer for CBT Nuggets. You can get a full week free if you head over to https://cbt.gg/shawnp0wers and sign up for a trial!

The 4 Part Series on Ansible includes:
Part 1 – DevOps for the Non-Dev
Part 2 – Making Things Happen
Part 3 – Playbooks
Part 4 – Putting it All Together

Have a Plan for Netplan

Ubuntu changed networking. Embrace the YAML.

If I’m being completely honest, I still dislike the switch from eth0, eth1, eth2 to names like, enp3s0, enp4s0, enp5s0. I’ve learned to accept it and mutter to myself while I type in unfamiliar interface names. Then I installed the new LTS version of Ubuntu and typed vi /etc/network/interfaces. Yikes. After a technological lifetime of entering my server’s IP information in a simple text file, that’s no longer how things are done. Sigh. The good news is that while figuring out Netplan for both desktop and server environments, I fixed a nagging DNS issue I’ve had for years (more on that later).

The Basics of Netplan

The old way of configuring Debian-based network interfaces was based on the ifupdown package. The new default is called Netplan, and although it’s not terribly difficult to use, it’s drastically different. Netplan is sort of the interface used to configure the back-end dæmons that actually configure the interfaces. Right now, the back ends supported are NetworkManager and networkd.

If you tell Netplan to use NetworkManager, all interface configuration control is handed off to the GUI interface on the desktop. The NetworkManager program itself hasn’t changed; it’s the same GUI-based interface configuration system you’ve likely used for years.

If you tell Netplan to use networkd, systemd itself handles the interface configurations. Configuration is still done with Netplan files, but once “applied”, Netplan creates the back-end configurations systemd requires. The Netplan files are vastly different from the old /etc/network/interfaces file, but they use YAML syntax, and they’re pretty easy to figure out.

The Desktop and DNS

If you install a GUI version of Ubuntu, Netplan is configured with NetworkManager as the back end by default. Your system should get IP information via DHCP or static entries you add via GUI. This is usually not an issue, but I’ve had a terrible time with my split-DNS setup and systemd-resolved. I’m sure there is a magical combination of configuration files that will make things work, but I’ve spent a lot of time, and it always behaves a little oddly. With my internal DNS server resolving domain names differently from external DNS servers (that is, split-DNS), I get random lookup failures. Sometimes ping will resolve, but dig will not. Sometimes the internal A record will resolve, but a CNAME will not. Sometimes I get resolution from an external DNS server (from the internet), even though I never configure anything other than the internal DNS!

I decided to disable systemd-resolved. That has the potential to break DNS lookups in a VPN, but I haven’t had an issue with that. With resolved handling DNS information, the /etc/resolv.conf file points to 127.0.0.53 as the nameserver. Disabling systemd-resolved will stop the automatic creation of the file. Thankfully, NetworkManager itself can handle the creation and modification of /etc/resolv.conf. Once I make that change, I no longer have an issue with split-DNS resolution. It’s a three-step process:

  1. Do sudo systemctl disable systemd-resolved.service.
  2. Then sudo rm /etc/resolv.conf (get rid of the symlink).
  3. Edit the /etc/NetworkManager/NetworkManager.conf file, and in the [main] section, add a line that reads dns=default.
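
After step 3, the relevant part of /etc/NetworkManager/NetworkManager.conf looks roughly like this (a minimal sketch; your file will likely contain other entries):


[main]
dns=default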

Once those steps are complete, NetworkManager itself will create the /etc/resolv.conf file, and the DNS server supplied via DHCP or static entry will be used instead of a 127.0.0.53 entry. I’m not sure why the resolved dæmon incorrectly resolves internal addresses for me, but the above method has been foolproof, even when switching between networks with my laptop.

Netplan CLI Configuration

If Ubuntu is installed in server mode, it is almost certainly configured to use networkd as the back end. To check, have a look at the /etc/netplan/config.yaml file. The renderer should be set to networkd in order to use the systemd-networkd back end. The file should look something like this:


network:
  version: 2
  renderer: networkd
  ethernets:
    enp2s0:
      dhcp4: true

Important note: remember that with YAML files, whitespace matters, so the indentation is important. It’s also very important to remember that after making any changes, you need to run sudo netplan apply so the back-end configuration files are populated.
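
Related tip: if you’re reconfiguring a machine you’re connected to remotely, newer Netplan versions also include a try command, which applies the change and rolls it back automatically unless you confirm within a timeout:


sudo netplan try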

The default renderer is networkd, so it’s possible you won’t have that line in your configuration file. It’s also possible your configuration file will be named something different in the /etc/netplan folder. All .yaml files there are read, so it doesn’t matter what the file is called as long as it ends with .yaml. Static configurations are fairly simple to set up:


network:
  version: 2
  renderer: networkd
  ethernets:
    enp2s0:
      dhcp4: no
      addresses:
        - 192.168.1.10/24
        - 10.10.10.10/16
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1, 8.8.8.8]

Notice I’ve assigned multiple IP addresses to the interface. Netplan does not support virtual interfaces like enp3s0:0; rather, multiple IP addresses can be assigned to a single interface.

Unfortunately, networkd doesn’t create an /etc/resolv.conf file if you disable the resolved dæmon. If you have problems with split-DNS on a headless computer, the best solution I’ve come up with is to disable systemd-resolved and then manually create an /etc/resolv.conf file. Since headless computers don’t usually move around as much as laptops, it’s likely the /etc/resolv.conf file won’t need to be changed. Still, I wish networkd had an option to manage the resolv.conf file the same way NetworkManager does.
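
Thankfully, a hand-built /etc/resolv.conf only needs a line or two. A minimal sketch, assuming an internal DNS server at 192.168.1.1 and a local search domain:


nameserver 192.168.1.1
search example.lan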

Advanced Network Configurations

The configuration formats are different, but it’s still possible to do more advanced network configurations with Netplan:

Bonding:


network:
  version: 2
  renderer: networkd
  bonds:
    bond0:
      dhcp4: yes
      interfaces:
        - enp2s0
        - enp3s0
      parameters:
        mode: active-backup
        primary: enp2s0

The various bonding modes (balance-rr, active-backup, balance-xor, broadcast, 802.3ad, balance-tlb and balance-alb) are supported.

Bridging:


network:
  version: 2
  renderer: networkd
  bridges:
    br0:
      dhcp4: yes
      interfaces:
        - enp4s0
        - enp3s0

Bridging is even simpler to set up. This configuration creates a bridge device using the two interfaces listed. The device (br0) gets address information via DHCP.

CLI Networking Commands

If you’re a crusty old sysadmin like me, you likely type ifconfig to see IP information without even thinking. Unfortunately, those tools are not usually installed by default. This isn’t actually the fault of Ubuntu and Netplan; the old ifconfig toolset has been deprecated. If you want to use the old ifconfig tool, you can install the package:


sudo apt install net-tools

But, if you want to do things the “correct” way, the new ip tool is what to use. Here are some equivalents of things I commonly do with ifconfig:

Show network interface information.

Old way:


ifconfig

New way:

ip address show

(Or you can just do ip a, which is actually less typing than ifconfig.)

Bring interface up.

Old way:

ifconfig enp3s0 up

New way:

ip link set enp3s0 up

Assign IP address.

Old way:

ifconfig enp3s0 192.168.1.22

New way:

ip address add 192.168.1.22 dev enp3s0

Assign complete IP information.

Old way:


ifconfig enp3s0 192.168.1.22 netmask 255.255.255.0 broadcast 192.168.1.255

New way:


ip address add 192.168.1.22/24 broadcast 192.168.1.255 dev enp3s0

Add alias interface.

Old way:


ifconfig enp3s0:0 192.168.100.100/24

New way:


ip address add 192.168.100.100/24 dev enp3s0 label enp3s0:0

Show the routing table.

Old way:


route

New way:


ip route show

Add route.

Old way:


route add -net 192.168.55.0/24 dev enp4s0

New way:


ip route add 192.168.55.0/24 dev enp4s0

Old Dogs and New Tricks

I hated Netplan when I first installed Ubuntu 18.04. In fact, on the particular server I was installing, I actually started over and installed 16.04 because it was “comfortable”. After a while, curiosity got the better of me, and I investigated the changes. I’m still more comfortable with the old /etc/network/interfaces file, but I have to admit, Netplan makes a little more sense. There is a single “front end” for configuring networks, and it uses different back ends for the heavy lifting. Right now, the only back ends are the GUI NetworkManager and the systemd-networkd dæmon. With the modular system, however, that could change someday without the need to learn a new way of configuring interfaces. A simple change to the renderer line would send the configuration information to a new back end.

With regard to the new command-line networking tool (ip vs. ifconfig), it really behaves more like other network devices (routers and so on), so that’s probably a good change as well. As technologists, we need to be ready and eager to learn new things. If we weren’t always trying the next best thing, we’d all be configuring Trumpet Winsock to dial in to the internet on our Windows 95 machines. I’m glad I tried that new Linux thing, and while it wasn’t quite as dramatic, I’m glad I tried Netplan as well!

If you’re interested in learning from me directly, my day job is a Linux trainer at CBT Nuggets. There’s TONS of training available, on Linux, Cisco, Microsoft, etc., and you get a full week free when you sign up. It’s like drinking from the firehose of tech knowledge! https://cbt.gg/shawnp0wers

Password Managers. Yes You Need One.

If you can remember all of your passwords, they’re not good passwords.

I used to teach people how to create “good” passwords. Those passwords needed to be lengthy, hard to guess and easy to remember. There were lots of tricks to make your passwords better, and for years, that was enough.

That’s not enough anymore.

It seems that another data breach happens almost daily, exposing sensitive information for millions of users, which means you need to have separate, secure passwords for each site and service you use. If you use the same password for any two sites, you’re making yourself vulnerable if any single database gets compromised.

There’s a much bigger conversation to be had regarding the best way to protect data. Is the “password” outdated? Should we have something better by now? Granted, there is two-factor authentication, which is a great way to help increase the security on accounts. But although passwords remain the main method for protecting accounts and data, there needs to be a better way to handle them—that’s where password managers come into play.

The Best Password Manager

No, I’m not burying the lede by skipping to all the reviews. As Doc Searls, Katherine Druckman and I discussed in Episode 8 of the Linux Journal Podcast, the best password manager is the one you use. It may seem like a cheesy thing to say, but it’s a powerful truth. If it’s more complicated to use a password manager than it is to re-use the same set of passwords on multiple sites, many people will just choose the easy way.

Sure, some people are geeky enough to use a password manager at any cost. They understand the value of privacy, understand security, and they take their data very seriously. But for the vast majority of people, the path of least resistance is the way to go. Heck, I’m guilty of that myself in many cases. I have a Keurig coffee machine, not because the coffee is better, but because it’s more convenient. If you’ve ever eaten a Hot Pocket instead of cooking a healthy meal, you can understand the mindset that causes people to make poor password choices. If the goal is having smart passwords, it needs to be easier to use smart passwords than to type “password123” everywhere.

The Reason It Might Work Now

Mobile devices have become the way most people do most things online. Heck, Elon Musk said that we’ve become cybernetic beings; it’s just that the bandwidth to our cybernetic components is really slow (that is, typing on our phones). It’s always been possible to have some sort of password management app on your phone, but until recently, the operating systems didn’t integrate with password managers. That meant you’d have to go from one app into your password manager, look up the site/app, copy the password, switch back to the app, paste the password, and then hope you got it right. Those days are thankfully in the past.

Both recent Android systems and iOS (Apple, not Cisco) versions allow third-party password managers to integrate directly into the data entry system. That means when you’re using a keyboard to type in a login or password, in any app, you can pull in a password manager and enter the data directly with no app switching. Plus, if you have biometrics enabled, most of the time you can unlock your password database with a fingerprint or a view of your face. (For those concerned about the security of biometric-only authentication, it can, of course, be turned off, but remember how important ease of use is for most people!)

So although password managers have been around for years and years, I truly believe it’s only with the advent of their integration into the main operating system of mobile devices that people will actually be able to use them widely. Not all Linux users will agree with me, and not all people in general will want their passwords available in such an easy manner. For the purpose of this article, however, a mobile option is a necessity.

A Tale of Two Concepts

Remember when “the cloud” was a buzzword that didn’t really mean anything specific, but people used it all the time anyway? Well, now it very clearly means servers or services run on computers you don’t own, in data centers you don’t control. The “cloud” is both awesome and terrible. When it comes to storing password data, many people are rightfully concerned about cloud storage. When it comes to password managers, there are basically two types: the kind that stores everything in a local database file and those that store the database in the cloud.

The cloud-based storage isn’t as unsettling as it seems. When the database is stored on the “servers in the sky”, it’s encrypted before it leaves your device. Those companies don’t have access to your actual passwords, just the highly encrypted database that holds them—as long as you trust the companies to be honest about such things. For what it’s worth, I do think the major companies are fairly trustworthy about keeping their grubby mitts off your actual passwords. Still, with the closed-source options, a level of trust is required that some people just aren’t willing to give. I’m going to look at password managers from both camps.

The Contenders

I picked five(-ish) password managers for this review. Please realize there are dozens and dozens of very usable, very secure, password managers for Linux. Some are command-line only. Some are just basic PGP encryption of text files containing user name/password pairs. Today’s review is not meant to be all-encompassing; it’s meant to be helpful for average Linux users who want to handle their passwords better than they currently do. I say five(-ish), because one of the entries has multiple versions. The list is:

  1. KeePass/KeePassX/KeePassXC: this is the one(-ish) that has multiple variations on the same theme. More details later.
  2. 1Password.
  3. LastPass.
  4. Bitwarden.
  5. Browser.

I highlight each of these in this article, in no particular order.

Your Browser’s Password Database

Most people don’t consider using their browser as a password manager a good idea. I’m one of those people. Depending on the browser, the version and the settings you choose, your passwords might not even be encrypted. There is also the problem of using those passwords in other apps. Granted, if you use Chrome, your Android phone likely will be able to access the passwords for you to use in other apps, but I’m simply not convinced the browser is the best place to store your passwords.

I’m sure the password storage feature of modern browsers is more secure than in the past, but a browser’s main function isn’t to secure your passwords, so I wouldn’t trust it to do so. I mention this option because it’s installed by default with every browser. It’s probably the most widely used option, and that breaks my heart. It’s too easy to click “save my password” and conveniently have your password filled in the next time you visit.

Is using the browser’s “save password” function better than using nothing at all? Maybe. It does allow people to use different passwords, trusting the browser to remember them. But, that’s about it. I’m sure the latest browsers have the option to secure the passwords a bit, but it’s not that way by default. I know this, because when I sit at my wife’s computer, I simply start her browser (Chrome), and all her passwords are filled in for me when I visit various websites. They’ve almost made it too easy to use poor security practices. The only hope is to have better options that are even easier—and I think we actually do. Keep reading!

The KeePass Kraziness

First off, these password managers are the ones that use a local, non-cloud-based database for storing passwords. If the thought of your encrypted passwords living on someone else’s servers offends your sensibilities, this is probably the best choice for you. And it is a really good choice, whichever flavor you pick.

The skinny on the various programs that share similar names is that originally, there was KeePass. It didn’t have a Linux version, so there was another program, KeePassX, that used an identical (and fully compatible) database. KeePassX runs natively on Linux, along with the other major OSes. To complicate issues, KeePass then released a Linux version, which runs natively, but it uses Mono libraries. It runs, and it runs fine, but Mono is a bit kludgy on Linux, so most folks still used KeePassX. Then KeePassXC came around, because the KeePassX program was getting a little long in the tooth, and it hadn’t been updated in a long time. So now, there are three programs, all of which work natively on Linux, and all of which are perfectly acceptable programs to use. I prefer KeePassXC (Figure 1), but only because it seems to be most actively developed. The good news is, all three programs can use the exact same database file. Really. If there is a single ray of sunshine on a messy situation, it’s that.

""

Figure 1. KeePassXC has a friendly, native Linux interface.

KeePass(X/XC) Features:

  • Local database file, with no syncing mechanism.
  • Database can be synced via a third-party service (such as Dropbox).
  • Supports master password and/or keyfile unlocking.
  • Very nice password generator (Figure 2).
  • Secure localhost-only browser integration (KeePassHTTP).

KeePass(X/XC) Pros:

  • No cloud storage.
  • Command-line interface included (see the sketch at the end of this section).
  • 2FA abilities (YubiKey).
  • Open source.
  • No “premium” features, everything is free.

KeePass(X/XC) Cons:

  • No cloud storage (yes, it’s a pro and a con, depending).
  • Brand confusion with multiple variations.
  • Requires third-party Android/iOS app for mobile use.
  • More complicated than cloud-based alternatives (file to sync/copy).
""

Figure 2. The KeePassXC password generator is awesome. I don’t even use KeePassXC for my password manager, but I still like the generator!

The KeePass family of password managers is arguably the most open-source-minded option of those I cover here. Handling the syncing/copying of the database yourself, rather than depending on an unknown third party to store the data, has a traditional Linux feel. For those folks who are most concerned about their data integrity, a KeePass database is probably the best option. Thankfully, due to third-party tools like Keepass2Android (for Android) and MiniKeePass/KyPass (for iOS), it’s possible to use your database on mobile devices as well. In fact, most of those apps handle syncing your database for you.
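To give a feel for that workflow, here’s a minimal sketch that pairs the database with a third-party sync folder and drives it from the terminal with keepassxc-cli. The paths and entry name are hypothetical, and exact subcommands and flags can vary between KeePassXC versions, so treat it as illustrative:


# Keep the encrypted .kdbx file in a folder your sync tool already watches
mv ~/Passwords.kdbx ~/Dropbox/Passwords.kdbx

# List the entries in the database (prompts for the master password)
keepassxc-cli ls ~/Dropbox/Passwords.kdbx

# Show a single entry, including its protected password field
keepassxc-cli show -s ~/Dropbox/Passwords.kdbx "example-site"

# Generate a random 24-character password
keepassxc-cli generate --length 24

The appeal of this arrangement is that Dropbox (or whichever sync service you use) only ever sees the encrypted file; decryption happens locally on each device.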

Bitwarden

I didn’t know the Bitwarden password manager even existed until we did a Twitter poll asking what password managers LJ readers used. I have to admit, it’s an impressive system, and it ticks almost all the “feel good” boxes Linux users would want (Figure 3). Not only is it open source, but the non-premium offering is also a complete system. Yes, there is a premium option for $10/year, but the non-paid version isn’t crippled in any way.

""

Figure 3. Bitwarden is very well designed, and with its open-source nature, it’s hard to beat.

Bitwarden does store your data on its own cloud servers, but since the software is open source, you can examine the code to make sure the company isn’t doing anything underhanded. Bitwarden also has its own apps for Android/iOS and extensions for all major browsers. There’s no need to use a third-party tool. In fact, it even includes command-line tools for those folks who want to access the database in a text-only environment.
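As a hedged sketch of what that text-only access looks like with the bw client: the general flow is to log in once, unlock the vault to get a session key, then query it. The item names here are hypothetical, and flags may differ slightly between versions:


# Log in to your Bitwarden account (prompts for email and master password)
bw login

# Unlock the vault and capture a session key for later commands
export BW_SESSION="$(bw unlock --raw)"

# Search the vault for matching items
bw list items --search example-site

# Print just the password for an item
bw get password example-site

# Generate a random password with upper, lower, numbers and symbols
bw generate -ulns --length 24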

Bitwarden Features:

  • Open source.
  • Cloud-based storage.
  • Decent password generator.
  • Native apps for Linux, Windows, Mac, Android and iOS.
  • Browser extensions for all major browsers.
  • Options to store logins, secure notes, credit cards and so on.

Bitwarden Pros:

  • One developer for all apps.
  • Open source!
  • Cloud-based access.
  • Works offline if the “cloud” is unavailable.
  • Free version isn’t crippled.
  • Browser plugin works very well.

Bitwarden Cons:

  • Database is stored in the cloud (again, it’s a pro and a con, depending).
  • Some 2FA options require the Premium version.

Bitwarden Premium Version:

  • $10/year.
  • Additional 2FA options.
  • 1GB encrypted storage.

I’ll admit, Bitwarden is very, very impressive. If I had to pick a personal favorite, it probably would be this one. I’m already using a different option, and I’m happy with it, but if I were starting from scratch, I’d probably choose Bitwarden.

1Password

1Password is a widely used password manager, but honestly, I’m not sure why. Don’t get me wrong; it works well, and it has great features. The problem is that I can’t find any feature it offers over the alternatives, and there isn’t a free option at all.

There’s also no native Linux application, but the 1Password X browser extension works well under Linux, and it’s user-friendly enough to use for things other than browser login needs. Still, although I don’t begrudge the company for charging a fee for the service, the alternatives offer significant services for free, and that’s hard to beat. Finally, 1Password utilizes a “secret key” that’s required on each device to log in. Although it adds a layer of security, in practice, it’s a bit of a pain to set up on each device.

1Password Features:

  • Cloud-based storage.
  • Non-login data encryption (Figure 4).
  • Printable “emergency kit” for account recovery.
  • Cross-platform browser extension.
  • Offline access.

1Password Pros:

  • Easy-to-use interface.
  • Very good browser integration.

1Password Cons:

  • $3/month, no free features.
  • Secret-key system can be cumbersome.
  • No native Linux app.
  • Proprietary, closed-source code.

1Password Premium Features:

  • All features require a monthly subscription.
""

Figure 4. 1Password has a great interface, and it stores lots of data.

If there weren’t any other password managers out there, 1Password would be incredible. Unfortunately for the 1Password company, there are other options, several of which are at least as good. I will admit, I really liked the browser extension’s interface, and it handled inserting login information into authentication fields very well. I’m not convinced it’s enough for the premium price, however, especially since there isn’t a free option at all.

LastPass

Okay, first I feel I should admit that LastPass is the password manager I use (Figure 5). As I mentioned previously, if I were to start over from scratch, I’d probably choose Bitwarden. That said, LastPass keeps getting better, and its integration with browsers, mobile devices and native operating systems is pretty great.

""

Figure 5. I seldom use anything other than LastPass’s browser extension, unless I’m on my mobile device, but the app looks very similar.

LastPass offers a free tier and a paid tier. Not too long ago, you had to pay for the premium service ($2/month) in order to use it on a mobile device. Recently, however, LastPass opened mobile-device syncing and integration to the completely free offering. That is significant, because it brings the free version to the same level as the free version of Bitwarden. (I suspect Bitwarden is the reason LastPass changed its free tier, but I have no way of knowing.)

LastPass Features:

  • Cloud-based storage.
  • Native apps for Linux, iOS and Android.
  • 2FA.
  • Offline access.
  • Cross-platform browser extension.

LastPass Pros:

  • Cloud-based storage.
  • Very robust free offering.
  • Smoothest browser-based password saving (in my experience).

LastPass Cons:

  • Data stored in the cloud (yes, it’s a pro and a con, depending).
  • Rumored to have poor support (I’ve never needed it).
  • Proprietary, closed-source code.

LastPass Premium:

  • $2/month.
  • 1GB of online file storage.
  • Provides the ability to share passwords.
  • Enhanced 2FA possibilities.
  • Emergency access granting (Figure 6).
""

Figure 6. This is sort of a “dead man’s switch” for emergency access. It allows you to grant emergency access to someone, with the ability to revoke that access before it actually takes effect. Pretty neat!

LastPass is the only option I can give an opinion on based on extended experience. I did try each option listed here for a few days, and honestly, each one was perfectly acceptable. LastPass has been rock-solid for me, and even though it’s not open source, it does work well across multiple platforms.

The Winner?

Honestly, with the options available, especially those highlighted today, it’s hard to lose when picking a password manager. I sort of picked the top managers and gave an overview of each. There are other, more obscure password managers, and there are some options that are Linux-only. I decided to look at options that will work regardless of what platform you find yourself on now, or even in the future. Once you pick a solution, migrating is a bit of a pain, so starting with something flexible is ideal.

If you’re concerned about someone else controlling your data (even if it’s encrypted), the KeePass/KeePassX/KeePassXC family is probably your best bet. If you don’t mind trusting others with syncing your data, LastPass or Bitwarden will probably be ideal. I suppose if you don’t trust “free” products, or if you just really like the layout of 1Password, it’s a viable option. And I guess, in a pinch, using browser password management is better than nothing. But please, be sure the data is encrypted and password-protected.

Finally, even if none of these options are something you’d use on a daily basis, consider recommending one to someone you care about. Keeping track of passwords in a secure, sync-able database is a huge step in living a more secure online lifestyle. Now that mobile devices are taken seriously in the password management world, password managers make sense for everyone—even your non-techie friends and family.

[NOTE: This post was originally posted on the Linux Journal website. Since Linux Journal is now defunct, and authors own their content, I’m reposting here.]