Squoggle

Mac's tech blog

Category Archives: Sys Admin

SSH Keys

Scenario: You just installed your Linux Server and now you want to be able to SSH to that server using your SSH Keys so you don’t have to authenticate via password each time.

Assumption: You have already created your public and private SSH Keys and now you want to copy the Public Key to the Server so you can authenticate with SSH Keys.

The utility we are going to use is ssh-copy-id. This will copy your current user’s public SSH Key to the remote host.

  1. Copy the public SSH Key to the remote host like this:
    $ ssh-copy-id -i ~/.ssh/id_rsa.pub [remote-host]
  2. You will be prompted to enter the password for the New Host. It will copy over your public ssh key from ~/.ssh/id_rsa.pub
  3. You should now be able to ssh to the remote host using keys as your authentication method instead of password.
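If you want to confirm that key authentication is really being used (and not a silent fallback to password), you can tell ssh to allow only public key authentication for one connection, something like this:

$ ssh -o PreferredAuthentications=publickey -o PasswordAuthentication=no [remote-host]

If the key is in place this logs you straight in; if not, the connection fails instead of prompting for a password.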

Install and Set Up Ubuntu 22.04 Server

This page will walk through what I did to install Ubuntu 22.04 Server for use in my home network.

I’m installing it as a virtual machine using VirtualBox 7.0, but these instructions should also be valid if you are installing on a physical machine, which I will do once I have confirmed this is working the way I expect it to.

Virtual Machine Specs

The Virtual Machine Specs I set up for this are:

Memory: 4096 MB
Processor: 2 CPUs
Storage: SATA Port 0, 25 GB
Network: Bridged Adapter

In VirtualBox 7.0 you can do what is called an Unattended Installation. I believe this is only available for certain Operating Systems but since I have not explored that option fully I am skipping it for now.

Follow these instructions to install Ubuntu 22.04 Server:

  1. Power on the Machine.
  2. When Presented with the GNU GRUB screen, select ‘Try or Install Ubuntu Server‘.
  3. Ubuntu starts the boot process.
  4. When Presented with a Language Selection Menu, select your language.
  5. Your keyboard configuration may already be selected for you. If not select your keyboard and then select ‘Done‘.
  6. Choose the Type of install. For this document I am going to choose ‘Ubuntu Server‘ and also select ‘Search for third-party drivers’.
  7. On the Network connections screen my network settings have been filled in automatically by DHCP. This is satisfactory for me so I choose ‘Done‘.
  8. On the Configure proxy screen you can choose to configure a proxy. This can be common in a corporate environment but in my case as my home network I don’t need to do this. Click ‘Done‘ when satisfied.
  9. On the Configure Ubuntu archive mirror screen you can safely click ‘Done‘ unless you know otherwise.
  10. On the Guided storage configuration screen I chose Use an entire disk and Set up this disk as an LVM Group. I did NOT encrypt. Select ‘Done‘.
  11. On the Storage configuration screen I accepted the summary and selected ‘Done‘.
  12. On the Confirm destructive action dialogue box I selected ‘Continue‘ since this is a new machine and I am confident I am not overwriting anything.
  13. On the Profile setup screen I typed my name, chose a server name, chose a user name and password then selected ‘Done‘.
  14. On the Upgrade to Ubuntu Pro screen select ‘Skip for now‘ then select ‘Continue‘.
  15. On the SSH Setup screen select ‘Install OpenSSH server’, then select ‘Done‘.
  16. On the Third-party drivers screen I see that “No applicable third-party drivers are available locally or online.” so I select ‘Continue‘.
  17. On the Featured Server Snaps screen I’m leaving it all blank. This document is about installing Ubuntu and not about snaps so I may do another document on that later. Select ‘Done‘.
  18. You will see a message that it is installing the system and then security updates. When it is ready you will be able to select ‘Reboot Now‘.
  19. Once you have rebooted you should be given a login prompt. You can now login with the user you created.
  20. When you login you will get some statistics about the system, one of which is the IP address. You can use that IP address to ssh to the host now and do some of the other things outlined in this document.
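If you miss the IP address in that login banner, you can always look it up again from the console and then connect from your workstation. Something like this should do it:

$ ip -4 addr show
$ ssh [your-user]@[server-ip-address]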

Additional Resources

Here are additional resources that will be useful in configuring your server.

Additional Useful Tools

Install additional packages:

$ sudo apt install members
$ sudo apt install net-tools
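For what it’s worth, members lists the users in a group and net-tools brings back the classic networking commands (ifconfig, netstat, and friends). A couple of quick examples:

$ members sudo
$ ifconfig
$ sudo netstat -tulpn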

Additional Pages to review

How to Install and Configure an NFS Server on Ubuntu 22.04

How to Install and Configure an NFS Server on Ubuntu 22.04

How to Install a Desktop (GUI) on an Ubuntu Server

https://phoenixnap.com/kb/how-to-install-a-gui-on-ubuntu

How To Sudo without password

Scenario: You just installed your Linux Server, you are the only person using it, and you want to sudo without having to type your password all the time. This How To will show you one way of accomplishing that task.

This How To assumes you are a member of the sudo group.

  1. Check to see if you are a member of the sudo group:
    $ id
    You should see a list of all the groups you are a member of.
  2. Edit the /etc/sudoers file:
    $ sudo visudo
    This will open the /etc/sudoers file with the default editor.
  3. There will be a line that looks like this:
    %sudo ALL=(ALL:ALL) ALL
  4. Comment out that line and replace it with a line that looks like this:
    %sudo ALL=(ALL) NOPASSWD: ALL
  5. Save the file.

You should now be able to sudo without being prompted for your password every time.
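A quick way to check that the change took effect (this assumes nothing later in the sudoers file overrides the %sudo line):

$ sudo -k
$ sudo -n true && echo "passwordless sudo is working"

sudo -k drops any cached credentials, and sudo -n fails instead of prompting, so you get a clean yes/no answer.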

Install VirtualBox 7.0 on Linux Mint 21.x

This is what I did to install VirtualBox 7.0 on my new Linux Mint 21.1 workstation.

See the VirtualBox Wiki for the deets on VirtualBox 7.0

  1. Ensure your system has been updated:
    $ sudo apt update && sudo apt upgrade -y
  2. Download the VirtualBox GPG Keys:
    $ curl https://www.virtualbox.org/download/oracle_vbox_2016.asc | gpg --dearmor > oracle_vbox_2016.gpg
    $ curl https://www.virtualbox.org/download/oracle_vbox.asc | gpg --dearmor > oracle_vbox.gpg
  3. Import the VirtualBox GPG Keys:
    $ sudo install -o root -g root -m 644 oracle_vbox_2016.gpg /etc/apt/trusted.gpg.d/
    $ sudo install -o root -g root -m 644 oracle_vbox.gpg /etc/apt/trusted.gpg.d/
  4. There does not appear to be an official repository for Linux Mint, but Linux Mint is derived from Ubuntu 22.04 which is code named ‘Jammy’. Add the Jammy VirtualBox Repository to the system:
    $ echo "deb [arch=amd64] http://download.virtualbox.org/virtualbox/debian \
    jammy contrib" | sudo tee /etc/apt/sources.list.d/virtualbox.list
  5. Update the Repositories:
    $ sudo apt update
  6. Install Linux Headers:
    $ sudo apt install linux-headers-$(uname -r) dkms
  7. Install VirtualBox:
    $ sudo apt install virtualbox-7.0
  8. Download the VirtualBox Extension Pack:
    $ cd ~/Downloads
    $ VER=$(curl -s https://download.virtualbox.org/virtualbox/LATEST.TXT)
    $ wget https://download.virtualbox.org/virtualbox/$VER/Oracle_VM_VirtualBox_Extension_Pack-$VER.vbox-extpack
  9. Install the Extension Pack:
    $ sudo VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-*.vbox-extpack
  10. You can now launch VirtualBox from the Desktop menu.
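To confirm the install and the Extension Pack from the command line, something like this should do it:

$ VBoxManage --version
$ VBoxManage list extpacks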

Linux Mint 21.x

These are my notes on configuring Linux Mint 21.x.

If you find this and think it is useful, leave a comment and say what you like or don’t like. Keep in mind these are my own notes and are not intended to be a HowTo for the general public.

This installation was done on a Dell OptiPlex 7050. I’m also installing on Oracle VirtualBox, so I will add some additional steps for that, which will be noted as extra steps for VirtualBox.

Disable Secure Boot

I configured the Dell BIOS to have Secure Boot Disabled. It is possible to install this and have Secure Boot Enabled but for my purposes this is simply a hassle that I don’t need and the benefits are negligible for a home computer.

Install Linux Mint 21.x.

As of this writing it is Mint 21.1; I may update these instructions as newer versions come out. Installing Linux Mint has been covered in many other HowTos, so rather than going into lots of detail I am just focusing on what I do to configure it to my liking. I am installing on a fresh new disk, and I did install the multimedia codecs. If you have turned off Secure Boot as mentioned earlier, you will not have any additional prompts in this area.

I did select Advanced Features in the Installation Type window and chose to use LVM with the new installation. I chose to erase the disk because this is a new disk and a fresh install. I originally chose to encrypt my home directory, but maybe not; I am testing without encryption.

The installation is pretty straightforward and not complicated.

Up and Running

Virtual Box Guest Additions

For a VirtualBox virtual machine you will need to install the Guest Additions:

  1. Click Devices
  2. Insert Guest Additions CD image
  3. Click ‘Run’
  4. Type your password

This will install guest additions and allow you to resize your screen on the fly.

First Steps

When you first run Mint you will get a Welcome Screen. On the left click First Steps.

Panel Layout: I like the Traditional Panel Layout.

Launch the Update Manager and update everything. You may need to reboot at this point.

Launch Driver Manager and see if you need any drivers. I did not need any.

I’ll talk about System Snapshots a little later.

I will address Firewall a little later as well.

The other items on First Steps are pretty much self explanatory.

Firmware

I got a message when I did the updates that the firmware was outdated. I was able to resolve the issue by doing the following:

$ sudo apt install fwupd
$ fwupdmgr get-updates
$ fwupdmgr update

Then follow the prompts to update. The system will reboot and do the updates then reboot again.

Synergy

I’m putting Synergy first. For me it makes it easier to set up my new machine alongside my old one and use a single keyboard and mouse, so I don’t have to switch back and forth between keyboards.

Linux Mint 21 is based on Ubuntu 22.04 LTS. See: https://en.wikipedia.org/wiki/Linux_Mint

Go to https://symless.com/account and sign in. Go to the download page and get the package for Synergy 1. Synergy 2 is no longer supported and is not backwards compatible. Synergy 3 is in beta if you are interested. Download the Ubuntu 22 package and save it to ~/Downloads.

Install it on both the Server and Client computers. Make sure the same version is on both computers:

$ cd ~/Downloads
$ sudo apt install ./synergy_1.14.6-snapshot.88fdd263_ubuntu22_amd64.deb

Now from the desktop menu select Synergy and run it.

  • You will be prompted to name the computer. If your computer already has a name then it will suggest the name for you. Click ‘Apply’.
  • You will be prompted to enter your serial key. This can be found on the Account page on the Synergy web site.
  • You will be prompted to select to either ‘Use this computer’s keyboard and mouse…’ or ‘Use another computer’s keyboard and mouse…’. In this case I am using another computer’s keyboard and mouse. Select the appropriate response.
  • Type in the IP address of the Server. Click ‘Connect’
  • You will get a ‘Security Question’ about the Server’s fingerprint. Read that and click ‘Yes’.
  • On the Server side you need to click the ‘Configure Server’ button to configure the layout.
  • If you run into trouble you should go into preferences and un-check ‘Enable TLS encryption’ on both Server and Client and get it working without TLS. Then once it is working switch to TLS.
  • From the new computer’s startup menu find ‘Startup Application’ and add Synergy to startup list. I’ve added a startup delay of about 30 seconds.
  • Once you have everything working correctly you should go to Preferences in both Server and Client and click both ‘Hide on startup’ and ‘Minimize to system tray’. Now you can minimize and not have it open in your task bar.

Sudoers

Edit the /etc/sudoers file so you don’t have to put your password in each time:

$ sudo visudo

There will be a line that looks like this:

%sudo ALL=(ALL:ALL) ALL

Comment out that line and make it look like this:

%sudo ALL=(ALL) NOPASSWD: ALL

Now when you use sudo you will not have to enter your password.

Install OpenSSH Server

Install SSH Server so you can ssh to the host:

$ sudo apt install openssh-server -y

Test ssh to the new host. You may during this process encounter an error regarding an “Offending ECDSA key in ~/.ssh/known_hosts”. This is easily resolved by deleting the referenced line in ~/.ssh/known_hosts.
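Rather than editing ~/.ssh/known_hosts by hand, you can also remove the stale entry with ssh-keygen, using whichever host name or IP address appears in the error message:

$ ssh-keygen -R [new-host-name]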

I’ve also experienced an issue where ssh to this new host by name does not work, while ssh by IP address does. DNS resolution is correct, and I even have the host in /etc/hosts. No dice.

I was finally able to resolve the issue by putting an entry into the ssh config on the host I am connecting from, in the ~/.ssh/config.d/LocalHosts.conf file. The entry in this file looks like this:

Host pop
Hostname 192.168.20.34
ForwardX11 yes
ForwardX11Trusted yes

This seems to have solved the problem. I suspect I have some other conflicting entry in my ssh config files that is causing this, but I can’t find it.

SSH Keys:

Now that you can ssh to your new host you will want to be able to ssh using your ssh key instead of password. From the remote host do this:

$ ssh-copy-id -i ~/.ssh/id_rsa.pub [newhostname]

You will be prompted to enter the password for the New Host. It will copy over your public ssh key from ~/.ssh/id_rsa.pub. This assumes your public ssh key is indeed ~/.ssh/id_rsa.pub.

You should be able to ssh to the new host now without entering your password.

(Optional) Now copy all of the ~/.ssh directory contents from your old host to this new host, so the keys, known_hosts, and authorized_keys files from your user on the old host are also on your new host.

From the old host:

$ cd ~/.ssh
$ scp -r * [new-host-name]:~/.ssh
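One thing to watch after copying: ssh is picky about permissions and will ignore keys that are too open. If you run into that, a reset on the new host along these lines should sort it out (adjust the file names to whatever you actually copied):

$ chmod 700 ~/.ssh
$ chmod 600 ~/.ssh/id_rsa ~/.ssh/authorized_keys ~/.ssh/config
$ chmod 644 ~/.ssh/id_rsa.pub ~/.ssh/known_hosts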

Hosts file:

Copy the Home Network section of your /etc/hosts file from the old host to the /etc/hosts file on the new host.

Dropbox

Install the Dropbox and python3-gpg packages:

$ sudo apt install dropbox python3-gpg

Then go to start menu and find Dropbox and run it.

You will get a message that says in order to use Dropbox you must download the proprietary daemon. Click OK.

A web page will pop up where you enter your credentials. Do so. You can now open the Dropbox client in the toolbar.

Install KeepassXC

KeePassXC is the greatest password safe, in my humble opinion.

Install it:

$ sudo apt install keepassxc -y

Install Chrome

You’ll need Chrome as well.

Go to https://www.google.com/chrome/

Click the Download Chrome button. Mine automatically downloaded into ~/Downloads. The 64 bit version was automatically selected.

Install it like this:

$ cd ~/Downloads
$ sudo apt install ./google-chrome-stable_current_amd64.deb

This will automatically install a repository as well for future updates.

Install Signal

Go to https://signal.org/en/download/
Click on Download for Linux and follow the instructions that pop up.

After you install Signal edit the startup line in /usr/share/applications/signal-desktop.desktop to look like this:

Exec=/opt/Signal/signal-desktop --use-tray-icon --no-sandbox %U
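If you’d rather not open the file in an editor, a sed one-liner along these lines makes the same change; double-check the path to the .desktop file on your system first:

$ sudo sed -i 's|^Exec=.*|Exec=/opt/Signal/signal-desktop --use-tray-icon --no-sandbox %U|' /usr/share/applications/signal-desktop.desktop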

Additional Software

There are other software packages I need. I’ll install them one at a time because I don’t want to confuse error messages between one package and another:

$ sudo apt install kwrite -y
$ sudo apt install kate -y
$ sudo apt install terminator -y
$ sudo apt install sshuttle -y
$ sudo apt install vim -y
$ sudo apt install sshpass -y
$ sudo apt install nfs-common -y
$ sudo apt install gparted -y
$ sudo apt install imagemagick -y
$ sudo apt install whois -y
$ sudo apt install lsscsi -y

Mount NFS Share

Create a mount point:

$ cd ~
$ mkdir -p mnt/[nfs-server-host-name]

Edit /etc/fstab and add these lines:

# External Mounts
[nfs-server-host-name]:[path-to-nfs-export] /home/[your-user]/mnt/[nfs-server-host-name] nfs rw,soft,noauto 0 0

Edit /etc/hosts and add the IP address of the NFS server (in my case, Serenity).

Then mount the NFS share:

$ sudo mount [nfs-server-host-name]:[path-to-nfs-export]
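If the mount hangs or gets refused, it can help to first confirm the export is actually visible from the new host. showmount comes with the nfs-common package installed earlier:

$ showmount -e [nfs-server-host-name]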

You will need to modify the firewall rule on the NFS server to allow connections from your new host before this will work.
https://squoggle.wordpress.com/2020/05/04/iptables/

Mount External Hard Drive

See what device your External USB device shows up as:

$ lsscsi
[0:0:0:0] disk ATA Samsung SSD 860 4B6Q /dev/sda
[1:0:0:0] cd/dvd HL-DT-ST DVD+-RW GU90N A1C2 /dev/sr0
[4:0:0:0] disk WD Elements 25A1 1018 /dev/sdb

In my case it shows up as /dev/sdb.
Edit your /etc/fstab file and make an entry like this:

# Western Digital Elements Backup Drive
/dev/sdb1    /home/mac/mnt/WD    ntfs    rw,relatime,user_id=0,group_id=0,allow_other   0 0

Create a mount point for the External Hard Drive

$ mkdir -p ~/mnt/WD

Then mount

$ sudo mount -a
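Device names like /dev/sdb1 can change between boots if other USB drives are plugged in. If that becomes a problem, look up the partition’s UUID and use that in the fstab entry above instead of the device name:

$ sudo blkid /dev/sdb1

Then replace /dev/sdb1 in /etc/fstab with UUID=[the-uuid-that-blkid-reports].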


Install Slack:

Go to https://slack.com/downloads/linux
Download the .deb 64 bit package into your ~/Downloads directory.
Then install it:

$ cd ~/Downloads
$ sudo apt install ./slack-desktop-4.29.149-amd64.deb

Crossover

Get the most recent version of Crossover here:
https://www.codeweavers.com/crossover

Get the free trial and download to your machine.

Then install like this:

$ sudo apt install ./crossover_[version-number].deb

Before you attempt to run any bottle you will need to install this library:

$ sudo apt-get install liblcms2-2:i386

This will install a bunch of other dependencies as well.

To export a bottle from one machine and import it on another (in this case Quicken, which is the only reason I run Crossover), do this:

  1. Open Crossover
  2. Right Click on the Quicken Bottle.
  3. Choose ‘Export Quicken 2017 to Archive’
  4. Choose a location to save it. It is a good idea to time stamp the file to not overwrite a previous working bottle export.
  5. On the new machine go to Menu > Bottle > Import Bottle Archive
  6. Browse to where you stored the archive, click it and click ‘Restore’.
  7. I get a message that CrossOver needs to install several Linux packages in order to run Windows applications. Click Yes. This will install a butt load of libraries and dependencies.
  8. You may think it is stuck, but when it seems to stop doing anything, check whether the ‘Continue’ button is active and, if so, click it.
  9. The process will sit there for a bit acting like it is stuck. Just be patient.
  10. Finally your bottle should be imported.
  11. Make symlinks from your home directory to your data files, because Crossover has issues finding files that are buried deep in the directory tree (see the example after this list).
  12. Crossover only needs your email address and login password to register. There is no serial number.
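Here is roughly what I mean by the symlinks in step 11. The bracketed path is just a placeholder for wherever your Quicken data files actually live; adjust it to your setup:

$ ln -s [path-to-your-quicken-data-files] ~/Quicken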

Surprisingly, this was the first time importing a bottle worked flawlessly. This is a new version on a new machine, so maybe they have worked the kinks out of it.

VueScan

Get the latest version here:

https://www.hamrick.com/alternate-versions.html

Profile

Modify your profile.

Edit ~/.bashrc and change

alias ll='ls -alF'

to

alias ll='ls -lF'

Set your $PATH to include ~/bin:

# Set your path to include $HOME/bin
PATH="$HOME/bin:$PATH"

Save the file and then source it like this:

$ source ~/.bashrc

Additional Packages

Here’s a way you can see what packages you have on your old machine and compare to what you have on your new machine.

On the old machine do:

$ sudo apt list --installed | cut -f1 -d/ | sort > installed.[old-hostname]

Then on the new machine do:

$ sudo apt list --installed | cut -f1 -d/ | sort > installed.[new-hostname]

Then SCP the installed.[new-hostname] file to the old host and then compare them like this:

$ diff installed.[old-hostname] installed.[new-hostname] | grep '<'

This will give you a list of packages that are installed on the old host but not on the new host. It turns out I had quite a few. Go through the list and see what you need on the new machine.
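Since both files are already sorted, another way to get the same list is with comm, which here suppresses everything except the lines unique to the old host’s file:

$ comm -23 installed.[old-hostname] installed.[new-hostname]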

The majority of the packages you find will probably be dependencies for some other package you installed. If you don’t know what a package is for you can easily check information about it with:

$ apt show [package-name]

The majority of the packages I found this way are libraries that are dependencies for other packages I have installed over time.

I found a few packages that I think are useful and should probably be installed:

alien
gimp
gparted
git
mlocate
nmap
traceroute

This is a short list of many.

Other Must See Pages

At this point you should be up and running and ready to work. However there are a lot more things that I typically use on a day to day basis when using Linux Mint.

This is not an exhaustive list, but it may be of help:

Install VirtualBox 7.0 on Linux Mint 21.x

Key Store Explorer

Installing ZenMap in UBUNTU 22.04

How to Install Zenmap on Ubuntu 22.04

How to install Proton VPN on Linux Mint

How to use the Proton VPN Linux app

Install JetBrains Toolbox App Then use the Toolbox to install PyCharm and DataGrip


Online Certificate Status Protocol (OCSP)

Online Certificate Status Protocol (OCSP) is an alternative method to Certificate Revocation Lists (CRLs) used to check the validity of digital certificates in a public key infrastructure (PKI).

When a user encounters a digital certificate, their software can use OCSP to send a request to the certificate authority (CA) to check the current status of the certificate. The CA responds to the request with one of three responses: “good”, “revoked”, or “unknown”.

If the response is “good”, the user’s software can proceed with the transaction or access to the resource protected by the certificate. If the response is “revoked”, the software rejects the certificate as invalid. If the response is “unknown”, the software may require additional steps to verify the validity of the certificate.

Unlike CRLs, which can become large and unwieldy as the number of revoked certificates increases, OCSP allows for more efficient and timely checking of individual certificates. However, it requires a constant connection to the CA to receive real-time status updates and can be subject to performance and privacy concerns.

The Good about OCSP

  • Real-time validation: OCSP provides real-time validation of certificates, so users can immediately determine whether a certificate is valid or not.
  • Smaller and more efficient: OCSP responses are typically smaller and more efficient than certificate revocation lists (CRLs), especially for large PKIs with many revoked certificates.
  • Reduced latency: OCSP can reduce latency by eliminating the need for users to download and parse large CRL files.
  • More privacy-friendly: OCSP can be more privacy-friendly than CRLs, as it doesn’t require users to download a complete list of revoked certificates and associated information.

The Bad about OCSP

  • Increased network traffic: OCSP requires users to contact the certificate authority (CA) server each time a certificate is validated, which can increase network traffic and cause performance issues.
  • Single point of failure: OCSP relies on a single CA server for validation, so if the server goes down or experiences issues, users may be unable to validate certificates.
  • Reduced reliability: OCSP may be less reliable than CRLs in certain situations, such as when there are issues with the CA’s OCSP server or network connectivity.
  • Potential privacy concerns: While OCSP can be more privacy-friendly than CRLs, it still allows the CA to track which certificates are being validated and when, which may be a concern for some users.

Check the OCSP status of a Certificate

You can check an Online Certificate Status Protocol (OCSP) response with OpenSSL using the openssl ocsp command. Here is an example command:

openssl ocsp -issuer issuer_cert.pem -cert certificate.pem -url http://ocsp.server.com -text

This command checks the status of the certificate in certificate.pem by sending an OCSP request to the server at http://ocsp.server.com. The issuer_cert.pem file is the certificate of the issuer that signed the certificate.pem file. The -text option displays the response in human-readable text.

After running the command, you will receive an OCSP response that includes the status of the certificate. If the status is “good”, the certificate is valid. If the status is “revoked”, the certificate has been revoked by the issuer. If the status is “unknown”, the server was unable to provide a definitive response for the certificate.

Get the Certificate from a Site:

Let’s use google.com as an example.

Get the Certificate for google.com and save it to a file named certificate.pem:

openssl s_client -connect google.com:443 -showcerts </dev/null | sed -n '/Certificate/,/-----END CERTIFICATE-----/p' | tail -n +3 > certificate.pem

Get the Issuing Cert from a Site:

Get the issuing certificate for google.com and save it to a file named issuer.pem:

openssl s_client -connect google.com:443 -showcerts </dev/null | sed -n '/1 s:/,/-----END CERTIFICATE-----/p' | tail -n +3 > issuer.pem

Extract the OCSP URL from the Certificate:

Use OpenSSL to get the OCSP URL from the certificate and save it to a variable named ocspurl:

ocspurl=$(openssl x509 -in certificate.pem -noout -text | grep "OCSP" | cut -f2,3 -d:)
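Newer OpenSSL builds also have a dedicated flag for this, which avoids the grep and cut gymnastics; if your version supports it, this does the same thing:

ocspurl=$(openssl x509 -in certificate.pem -noout -ocsp_uri)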

Test the OCSP Status of the Certificate:

Check the OCSP status of the certificate using the openssl ocsp command like this:

openssl ocsp -issuer issuer.pem -cert certificate.pem -url $ocspurl -text

You should get a response that looks something like this:

OCSP Request Data:
    Version: 1 (0x0)
    Requestor List:
        Certificate ID:
          Hash Algorithm: sha1
          Issuer Name Hash: 12D78B402C356206FA827F8ED8922411B4ACF504
          Issuer Key Hash: A5CE37EAEBB0750E946788B445FAD9241087961F
          Serial Number: 0CD04791FC985ABB27E20A42A232FDF5
    Request Extensions:
        OCSP Nonce: 
            0410CD24FED402FF2B1D2331485C81AD1C21
OCSP Response Data:
    OCSP Response Status: successful (0x0)
    Response Type: Basic OCSP Response
    Version: 1 (0x0)
    Responder Id: A5CE37EAEBB0750E946788B445FAD9241087961F
    Produced At: Apr 26 00:54:27 2023 GMT
    Responses:
    Certificate ID:
      Hash Algorithm: sha1
      Issuer Name Hash: 12D78B402C356206FA827F8ED8922411B4ACF504
      Issuer Key Hash: A5CE37EAEBB0750E946788B445FAD9241087961F
      Serial Number: 0CD04791FC985ABB27E20A42A232FDF5
    Cert Status: good
    This Update: Apr 26 00:39:01 2023 GMT
    Next Update: May  2 23:54:01 2023 GMT

    Signature Algorithm: ecdsa-with-SHA256
         30:45:02:20:45:c2:eb:e2:54:23:2a:c5:49:47:c2:f0:0b:cf:
         8d:06:6d:17:62:26:2e:4a:ba:8e:cd:61:bf:dd:af:e8:ea:cb:
         02:21:00:94:bd:5c:33:e7:ac:20:50:d4:15:45:9e:d8:8d:75:
         1a:fb:c5:95:5f:11:c7:b2:88:47:0a:5b:56:d0:3c:89:b5
WARNING: no nonce in response
Response verify OK
certificate.pem: good
	This Update: Apr 26 00:39:01 2023 GMT
	Next Update: May  2 23:54:01 2023 GMT

OpenSSL OCSP Commands Documentation

Online Certificate Status Protocol command

https://www.openssl.org/docs/man3.0/man1/openssl-ocsp.html

Certificate Revocation List (CRL)

Certificate Revocation Lists (CRLs) are used in public key infrastructure (PKI) to identify digital certificates that have been revoked by the certificate authority (CA) before their expiration date.

When a CA revokes a digital certificate, it adds the certificate’s serial number to the CRL. The CRL is then distributed to users who rely on the PKI, such as web browsers and other software that verify digital certificates.

When a user encounters a digital certificate that has been revoked, their software checks the CRL to confirm that the certificate is no longer valid. If the certificate’s serial number is listed on the CRL, the software will reject the certificate and prevent the user from accessing the website or resource protected by the certificate.

CRL Expiration

The client typically gets a new Certificate Revocation List (CRL) from the Certificate Authority (CA) when the existing CRL expires or when there have been changes to the status of certificates that have been revoked.

When a CA revokes a digital certificate, it adds the certificate’s serial number to the CRL. The CRL contains a list of all the revoked certificates, along with their revocation status and the reason for revocation.

The CRL has an expiration date and time, after which it is no longer considered valid. The expiration date is typically set by the CA when the CRL is issued, and it is usually a few days to a few weeks after the issue date. When the CRL is about to expire, the client will check with the CA to obtain a new CRL that is valid for the next period.
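You can see these dates for yourself on a downloaded CRL with openssl (use -inform PEM instead if your CRL is PEM encoded):

openssl crl -inform DER -noout -lastupdate -nextupdate -in [crl-file]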

In addition to the expiration date, the client may also obtain a new CRL if there are changes to the revocation status of certificates that have been previously listed in the CRL. This can happen if a certificate that was previously revoked is now reinstated, or if a certificate that was previously valid is now revoked.

The client can obtain a new CRL from the CA via various means, such as through online updates or downloads. Some PKIs also use alternative methods of certificate revocation, such as Online Certificate Status Protocol (OCSP), which can provide real-time updates on the status of certificates.

The Good about CRL

  • Offline validation: CRLs can be downloaded and stored offline, allowing users to validate certificates even when they are not connected to the network.
  • No single point of failure: Unlike OCSP, CRLs don’t rely on a single server for validation, so they are less susceptible to single points of failure.
  • Better reliability: CRLs may be more reliable than OCSP in certain situations, such as when the CA’s OCSP server or network connectivity is experiencing issues.
  • Can cover multiple certificates: A single CRL can cover multiple certificates, reducing the amount of data that needs to be downloaded and parsed.

The Bad about CRL

  • Larger size: CRLs can become large and unwieldy as the number of revoked certificates increases, leading to longer download times and increased storage requirements.
  • Increased latency: CRLs can introduce latency into the certificate validation process, as users must download and parse the entire CRL before they can validate a certificate.
  • May be outdated: CRLs are typically updated on a periodic basis, so there is a risk that a certificate may have been revoked between updates and the user may not be aware of it.
  • May present a privacy risk: CRLs can potentially expose information about revoked certificates, which could be used by attackers to gather information about a PKI.

Overall, CRLs can be an effective means of validating certificates in a PKI, especially in situations where offline validation is important or when the number of revoked certificates is relatively small. However, they also have some drawbacks that should be considered, such as larger size, increased latency, and potential privacy risks.

Delta CRL

A Delta Certificate Revocation List (CRL) is a type of CRL that contains only the revoked certificates that have been added or changed since the previous CRL was issued. The Delta CRL is meant to be used in conjunction with the base CRL, which contains the complete list of revoked certificates.

The Delta CRL is a more efficient way of distributing certificate revocation information, as it contains only the changes to the previous CRL, rather than the entire list of revoked certificates. This can significantly reduce the size of the CRL and the time it takes to download and process it.

To use a Delta CRL, the client first downloads the base CRL, which contains the complete list of revoked certificates. The client then downloads the Delta CRL, which contains only the changes since the previous CRL. The client then merges the Delta CRL with the base CRL to obtain a complete and up-to-date list of revoked certificates.

The use of Delta CRLs can help to improve the efficiency of certificate revocation in large PKIs, especially when the number of revoked certificates is high and changes occur frequently. However, the use of Delta CRLs also requires additional management and coordination between the CA and the client, as both parties must ensure that the Delta CRL is properly applied and merged with the base CRL.

Troubleshooting CRL

Sometimes you may need to troubleshoot certificate issues by examining a CRL (Certificate Revocation List)

Download a CRL

These instructions show how you can easily download a CRL from a website. I’ll use https://duckduckgo.com/ in this example.

  1. Open Google Chrome. Navigate to https://duckduckgo.com/. Notice the padlock in the address bar.
  2. Right click on the padlock in the address bar. Click Connection is secure to see the connection details.
  3. Click Certificate is valid to open the certificate details box. Click the Details tab.
  4. In the Certificate Fields box, scroll down and click on CRL Distribution Points. In the Field Value box you will see any URLs associated with the CRL for the Certificate Authority or the Signing Certificate.
  5. Copy and paste the URL into a new window of the browser. You will be prompted to save the file. In my case I downloaded a file named DigiCertTLSRSASHA2562020CA1-4.crl.

Parse the CRL

  1. Open a terminal in the directory where you saved the CRL.
  2. Check to see if the CRL is in DER format or PEM format. Most CRLs are in DER format. If you do a simple head command on the CRL file you will see whether it is a DER (binary) file or a PEM file. If it is binary you will see gibberish. If it is a PEM formatted file you will see “-----BEGIN X509 CRL-----”.
  3. Parse the CRL. If the CRL is in DER format use this syntax:
    openssl crl -inform DER -text -noout -in [crl-file] | less
    If the CRL is in PEM format use this syntax:
    openssl crl -inform PEM -text -noout -in [crl-file] | less
  4. You will see a list of all the revoked certificates that were issued by the Issuing Certificate.
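If you are chasing one particular certificate, you can grep the parsed output for its serial number instead of scrolling through the whole list. Something like this, assuming a DER formatted CRL and the serial number taken from the certificate’s details:

openssl crl -inform DER -text -noout -in [crl-file] | grep -i -A 3 "[serial-number]"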

OpenSSL CRL Commands Documentation

The OpenSSL CRL commands official documentation:

https://www.openssl.org/docs/man3.0/man1/openssl-crl.html

TLS 1.2 vs. TLS 1.3: Exploring the Key Differences and Advancements in Security

Introduction

Transport Layer Security (TLS) is a widely-used cryptographic protocol that provides secure communications over a computer network, such as the Internet. TLS ensures that the data transmitted between a client and a server is encrypted and protected from eavesdropping and tampering. In this blog post, we will discuss the key differences between TLS 1.2 and TLS 1.3, the latest version of the protocol, and explore how TLS 1.3 offers improved security, performance, and privacy compared to its predecessor.

Faster and More Efficient Handshake Process

One of the most significant improvements in TLS 1.3 is the streamlined and efficient handshake process. In most cases, TLS 1.3 reduces the number of round trips between the client and server to just one, speeding up the connection establishment. This improvement is particularly beneficial for latency-sensitive applications like web browsing, providing a more responsive user experience.

Modern and Secure Cryptographic Algorithms

TLS 1.3 supports only modern and secure cryptographic algorithms, removing outdated and vulnerable ciphers that were still allowed in TLS 1.2. By eliminating weak ciphers and focusing on strong encryption techniques, TLS 1.3 offers better resistance to attacks and cryptographic weaknesses. For example, TLS 1.3 no longer supports the RSA key exchange, which is vulnerable to several attacks.
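A quick way to see this for yourself is to ask a server to negotiate each version explicitly with openssl s_client; this assumes OpenSSL 1.1.1 or newer, which is what ships with recent Ubuntu and Mint releases:

$ openssl s_client -connect example.com:443 -tls1_3 </dev/null
$ openssl s_client -connect example.com:443 -tls1_2 </dev/null

If the handshake succeeds, the output shows the negotiated protocol and cipher; if the server refuses that version, the connection simply fails.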

Mandatory Forward Secrecy

Forward secrecy is a security feature that ensures that even if a server’s private key is compromised, past communication sessions cannot be decrypted. While forward secrecy was optional in TLS 1.2, it is mandatory in TLS 1.3. This is achieved by using ephemeral (short-lived) keys for each session, which are discarded after use, further enhancing the security of the protocol.

Simplified Protocol Design

TLS 1.3 boasts a simpler and cleaner design compared to TLS 1.2, as it has removed many features and options that were either outdated or considered insecure. This streamlined design makes the protocol easier to implement, understand, and analyze, reducing the likelihood of implementation errors and security vulnerabilities.

Zero Round-Trip Time (0-RTT) Resumption

A new feature introduced in TLS 1.3 is the 0-RTT resumption, which allows clients to send encrypted data to a server during the initial handshake, without waiting for the handshake to complete. This can significantly improve performance in certain scenarios, such as when a client is reconnecting to a previously-visited server. However, this feature can also introduce some security risks, and its use should be carefully evaluated.

Conclusion

TLS 1.3 offers several advantages over TLS 1.2, including improved security, performance, and privacy. Its adoption has been growing steadily, and it is now the recommended version for securing communications over the Internet. However, it is important to note that while TLS 1.3 is superior, TLS 1.2 is still considered secure when properly configured with modern ciphers and settings. By understanding the key differences between these two versions, organizations can make informed decisions about their security infrastructure and ensure the highest level of protection for their users.

The TLS 1.2 Handshake Explained: Securing Your Online Data with a Twist

Introduction

Howdy, folks! In today’s digital age, the need for secure online communication is more important than ever. And that’s where the Transport Layer Security (TLS) protocol comes in. It’s the trusty sidekick that keeps your sensitive data safe from prying eyes. In this blog post, we’re going to take a down-home look at the TLS 1.2 handshake process to help you understand how it ensures secure communication between your computer and the websites you visit.

1. The Meet and Greet

When you decide to visit a secure website, your computer (the client) and the website’s server start a friendly little dance called the TLS handshake. The first step of this dance is the “Client Hello” message, where your computer sends a list of its preferred cryptographic algorithms and a random number to the server. It’s sort of like saying, “Howdy, partner! These are the steps I know. What about you?”

2. The Server’s Response

Next, the server picks the best matching cryptographic algorithms and sends a “Server Hello” message back to the client, sharing its own random number. In addition, the server sends its digital certificate, which is like a digital ID card, to prove its identity. It’s the server’s way of saying, “Well, howdy! I reckon we can dance to the same tune. Here’s my ID, just so you know I’m legit.”

3. Checking Credentials

Your computer takes a gander at the server’s certificate and verifies it with the certificate authority (CA) that issued it. If everything checks out, your computer says, “Well, alrighty then! You seem like a fine partner for this dance.”

4. The Secret Handshake

Now that both sides have agreed on the steps, it’s time to create a secret key for encrypting and decrypting the data. Your computer generates a “pre-master secret” and encrypts it with the server’s public key from its certificate. This encrypted pre-master secret is then sent back to the server, which decrypts it with its private key. It’s like sharing a secret handshake that only the two of them will know.

5. Securing the Dance Floor

With the pre-master secret securely exchanged, both your computer and the server derive the same “master secret.” From this master secret, they generate symmetric encryption keys and other required cryptographic material. It’s like setting up a private dance floor, so no one can see or interfere with your moves.

6. The Final Steps

Finally, both the client and the server send “Change Cipher Spec” and “Finished” messages to each other, indicating that they’re ready to start using the newly established encryption keys. It’s like saying, “Alright, partner, let’s start dancing with our new secret steps!”
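If you ever want to watch this dance happen for real, openssl can print the handshake messages as they go by. Forcing TLS 1.2 keeps the output matching the steps described above:

$ openssl s_client -connect example.com:443 -tls1_2 -msg </dev/null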

Conclusion

And there you have it, folks! That’s the TLS 1.2 handshake in a nutshell. This trusty process keeps your online chats safe and sound, ensuring that your sensitive data is encrypted and secure from eavesdroppers. So the next time you visit a secure website or send a confidential email, remember to tip your hat to the hardworking TLS 1.2 handshake that keeps your information safe as houses.

CentOS Drive Testing

My Server was making noises that were uncharacteristic. This is how I tested my hard drives for failure.

  1. Install smartmontools:
    # yum install smartmontools
  2. Get a listing of all your hard drives:
    # lsblk
  3. Run a test on one of the hard drives:
    # smartctl -t short /dev/sda
    You will see something similar to the following:
    smartctl 7.0 2018-12-30 r4883 [x86_64-linux-3.10.0-693.11.6.el7.x86_64] (local build)
    Copyright (C) 2002-18, Bruce Allen, Christian Franke, www.smartmontools.org
    === START OF OFFLINE IMMEDIATE AND SELF-TEST SECTION ===
    Sending command: "Execute SMART Short self-test routine immediately in off-line mode".
    Drive command "Execute SMART Short self-test routine immediately in off-line mode" successful.
    Testing has begun.
    Please wait 2 minutes for test to complete.
    Test will complete after Fri Sep 23 13:02:21 2022
    Use smartctl -X to abort test.
  4. It will give you a time when you can check the results. When the time has elapsed, come back and check the results like this:
    # smartctl -H /dev/sda
    smartctl 7.0 2018-12-30 r4883 [x86_64-linux-3.10.0-693.11.6.el7.x86_64] (local build)
    Copyright (C) 2002-18, Bruce Allen, Christian Franke, www.smartmontools.org
    === START OF READ SMART DATA SECTION ===
    SMART overall-health self-assessment test result: PASSED
  5. If the test fails you will see something like this:
    # smartctl -H /dev/sdb
    smartctl 7.0 2018-12-30 r4883 [x86_64-linux-3.10.0-693.11.6.el7.x86_64] (local build)
    Copyright (C) 2002-18, Bruce Allen, Christian Franke, www.smartmontools.org
    === START OF READ SMART DATA SECTION ===
    SMART overall-health self-assessment test result: FAILED!
    Drive failure expected in less than 24 hours. SAVE ALL DATA.
    Failed Attributes:
    ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
    5 Reallocated_Sector_Ct 0x0033 063 063 140 Pre-fail Always FAILING_NOW 1089
  6. Looks like you need to replace /dev/sdb
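Whether a drive passes or fails, you can pull the full SMART attributes and the self-test history for more detail:

# smartctl -a /dev/sdb
# smartctl -l selftest /dev/sdb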

How to Replace the Hard drive

This is what I did to replace the hard drive.

  1. Install lshw package:
    # yum install lshw
  2. Now list hardware of type disk:
    # lshw -class disk
    You should get way too much info.
  3. Filter the info with grep like so:
    # lshw -class disk | grep -A 5 -B 6 /dev/sdb
    You should now only get the one drive you are looking for.
    Mine looks like this:
    # lshw -class disk | grep -A 5 -B 6 /dev/sdb
    *-disk:1
    description: ATA Disk
    product: WDC WD1002FAEX-0
    vendor: Western Digital
    physical id: 1
    bus info: scsi@5:0.0.0
    logical name: /dev/sdb
    version: 1D05
    serial: WD-WCATR1933480
    size: 931GiB (1TB)
    capabilities: partitioned partitioned:dos
    configuration: ansiversion=5 logicalsectorsize=512 sectorsize=512 signature=000cd438

So it looks like I need to replace a 1TB Western Digital. Fortunately this disk is in a two-disk RAID array.

Remove the HD from the Raid Array

This is what I did to remove the HD from the RAID array. Before proceeding, back up everything. I do a daily offsite backup, so I am covered, in theory.

  1. Redo the lsblk command from above to confirm which disk is which:
    # lsblk
    NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
    sdb 8:16 0 931.5G 0 disk
    └─sdb1 8:17 0 931.5G 0 part
    └─md0 9:0 0 931.4G 0 raid1
    └─vg_raid-lv_raid 253:4 0 931.4G 0 lvm /mnt/Raid
    sdc 8:32 0 931.5G 0 disk
    └─sdc1 8:33 0 931.5G 0 part
    └─md0 9:0 0 931.4G 0 raid1
    └─vg_raid-lv_raid 253:4 0 931.4G 0 lvm /mnt/Raid
  2. Remember that the defective disk in this case is /dev/sdb and the good one is /dev/sdc
  3. Write all cache to disk:
    # sync
  4. Set the disk as failed with mdadm:
    # mdadm --manage /dev/md0 --fail /dev/sdb1
    This is the failed partition from /dev/sdb.
    You should see something like this:
    mdadm: set /dev/sdb1 faulty in /dev/md0
  5. Confirm it has been marked as failed:
    # cat /proc/mdstat
    Personalities : [raid1]
    md0 : active raid1 sdc1[1] sdb1[0](F)
    976630464 blocks super 1.2 [2/1] [_U]
    bitmap: 0/8 pages [0KB], 65536KB chunk

    The (F) next to sdb1 indicates Failed.
  6. Now remove the disk with mdadm:
    # mdadm --manage /dev/md0 --remove /dev/sdb1
  7. Now confirm with the cat command as before:
    # cat /proc/mdstat
    Personalities : [raid1]
    md0 : active raid1 sdc1[1]
    976630464 blocks super 1.2 [2/1] [_U]
    bitmap: 0/8 pages [0KB], 65536KB chunk

    Notice that sdb1 is now gone.
  8. You can also confirm this with the lsblk command:
    # lsblk
    NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
    sdb 8:16 0 931.5G 0 disk
    └─sdb1 8:17 0 931.5G 0 part
    sdc 8:32 0 931.5G 0 disk
    └─sdc1 8:33 0 931.5G 0 part
    └─md0 9:0 0 931.4G 0 raid1
    └─vg_raid-lv_raid 253:4 0 931.4G 0 lvm /mnt/Raid
  9. You can now shutdown the server and replace that hard drive.
    It is easy to find the correct hard drive with the serial number you got from the lshw command you ran earlier. The serial number is: WD-WCATR1933480
  10. Power on server.
  11. Here is where I ran into an issue that left me scratching my head for quite some time. I’m documenting it here so if it happens again I can resolve it quickly.
    It turns out that the spare drive I had on hand, which I thought was new, was not. It was actually a drive I had installed in another system that was since retired, and it still had a boot partition on it. When I booted the server, that was the partition that booted instead of my regular boot partition. I even had to recover passwords on it because the user and root passwords were not the same. All along I was thinking something had happened to bork the users somehow, but it turned out the replacement drive was booting and it was not really new. The lesson learned here is to make sure the drive you put in has had any partitions removed. I did this by putting the drive in another system and using fdisk to remove the partitions. Now when I boot the server the normal boot partition boots and the new drive is designated as sdb, as I expect.
  12. Now you can copy the partition information from the good disk (/dev/sdc) to the new disk (/dev/sdb). Be warned that this will destroy any partition information on the new disk. Since I already destroyed any partition information in the previous step I’m good with this. The command looks like this:
    # sfdisk -d /dev/sdc | sfdisk /dev/sdb
  13. You can check the partition info is correct with the lsblk command:
    # lsblk
    NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
    sdb 8:16 0 931.5G 0 disk
    └─sdb1 8:17 0 931.5G 0 part
    sdc 8:32 0 931.5G 0 disk
    └─sdc1 8:33 0 931.5G 0 part
    └─md0 9:0 0 931.4G 0 raid1
    └─vg_raid-lv_raid 253:2 0 931.4G 0 lvm /mnt/Raid
  14. Now you can reverse the process and create the mirror that you previously had like this:
    # mdadm --manage /dev/md0 --add /dev/sdb1
  15. Now you can verify the status of your raid like this:
    # mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Tue Jun 27 17:49:31 2017
        Raid Level : raid1
        Array Size : 976630464 (931.39 GiB 1000.07 GB)
     Used Dev Size : 976630464 (931.39 GiB 1000.07 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Sat Sep 24 14:46:35 2022
             State : clean, degraded, recovering 
    Active Devices : 1
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 1

Consistency Policy : bitmap

    Rebuild Status : 1% complete

          Name : Serenity.localdomain:0  (local to host Serenity.localdomain)
          UUID : f06aeaae:e0c9707b:6d982f07:3f320578
        Events : 114297

Number   Major   Minor   RaidDevice State
   2       8       17        0      spare rebuilding   /dev/sdb1
   1       8       33        1      active sync   /dev/sdc1
  • You can see that the Rebuild Status is at 1% and that the array is in a rebuilding state.
  • You can get the status of the rebuild like so:
    # cat /proc/mdstat
Personalities : [raid1] 
md0 : active raid1 sdb1[2] sdc1[1]
      976630464 blocks super 1.2 [2/1] [_U]
      [>....................]  recovery =  0.7% (7077312/976630464) finish=129.7min speed=124486K/sec
      bitmap: 8/8 pages [32KB], 65536KB chunk

You can keep re-running this command to follow the rebuild progress if it is interesting to you.
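If you’d rather not re-run it by hand, watch will refresh it for you:

# watch -n 10 cat /proc/mdstat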

There’s something missing here. It probably relates to this:

CentOS 7 created mdadm array disappears after reboot