Squoggle

Mac's tech blog

How To Configure Linux Raid

Scenario: You have a Linux server with a boot drive and two additional hard drives that you want to configure as a RAID array for data. This How To demonstrates how to create a RAID 1 array using two physical drives in a server that is already up and running.

RAID 1 is disk mirroring: one disk is mirrored to the other. If one physical drive fails, I can drop the bad drive from the RAID array, install a new drive, add the new drive to the array, and rebuild it as a mirror of the surviving disk.

RAID can be implemented in hardware or software. With hardware RAID the configuration is done on the hardware itself, typically in the controller or in the system’s BIOS. This How To is about software RAID, so everything is done from the existing running OS.

In this scenario I have an existing Ubuntu 22.04 LTS Virtual Machine server running, to which I have added two 1 GB SCSI drives.

List Disks

The first step is to get information about the disks on your system:

$ sudo lshw -class disk
  *-cdrom                   
       description: DVD reader
       product: CD-ROM
       vendor: VBOX
       physical id: 0.0.0
       bus info: scsi@1:0.0.0
       logical name: /dev/cdrom
       logical name: /dev/sr0
       version: 1.0
       capabilities: removable audio dvd
       configuration: ansiversion=5 status=nodisc
  *-disk
       description: ATA Disk
       product: VBOX HARDDISK
       vendor: VirtualBox
       physical id: 0.0.0
       bus info: scsi@2:0.0.0
       logical name: /dev/sda
       version: 1.0
       serial: VB819a3b95-f514a971
       size: 25GiB (26GB)
       capabilities: gpt-1.00 partitioned partitioned:gpt
       configuration: ansiversion=5 guid=e2bb1f14-9526-4dd6-be4c-5aabb4b48723 logicalsectorsize=512 sectorsize=512
  *-disk:0
       description: SCSI Disk
       product: HARDDISK
       vendor: VBOX
       physical id: 0.0.0
       bus info: scsi@3:0.0.0
       logical name: /dev/sdb
       version: 1.0
       size: 1GiB (1073MB)
       configuration: ansiversion=5 logicalsectorsize=512 sectorsize=512
  *-disk:1
       description: SCSI Disk
       product: HARDDISK
       vendor: VBOX
       physical id: 0.1.0
       bus info: scsi@3:0.1.0
       logical name: /dev/sdc
       version: 1.0
       size: 1GiB (1073MB)
       configuration: ansiversion=5 logicalsectorsize=512 sectorsize=512

Notice that Disk 0 and Disk 1, both SCSI disks, are the two disks that will be used for the RAID 1 device. The single ATA disk is where the operating system is installed on this system. Disk 0’s logical device name is /dev/sdb and Disk 1’s logical device name is /dev/sdc.

Partition Disks

The two disks each need a partition created on them. I do not plan on making these two data drives bootable, so there is no need for a boot loader on them.

Use the fdisk tool to create a new partition on each drive and set the partition type to Linux raid autodetect:

$ sudo fdisk /dev/sdb

Follow these instructions:

  1. Type n to create a new partition.
  2. Type p to select primary partition.
  3. Type 1 to create /dev/sdb1.
  4. Press Enter to choose the default first sector
  5. Press Enter to choose the default last sector. This partition will span across the entire drive.
  6. Typing p will print information about the newly created partition. By default the partition type is Linux.
  7. We need to change the partition type, so type t.
  8. Enter fd to set partition type to Linux raid autodetect.
  9. Type p again to check the partition type.
  10. Type w to apply the above changes.

Do the same thing for the second drive:

$ sudo fdisk /dev/sdc

Follow the same procedure as above.

You should now have a RAID-ready partition on each disk: /dev/sdb1 and /dev/sdc1.
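As an aside, the interactive fdisk session above can be scripted. sfdisk accepts the partition layout on stdin, and it also works on plain image files, so you can rehearse the layout without touching a real disk. This is a sketch; demo.img is a scratch file:

```shell
# Rehearse the layout on a scratch image file (no root required).
truncate -s 100M demo.img

# One partition spanning the whole device, type fd (Linux raid autodetect).
echo 'type=fd' | sfdisk --quiet demo.img

# Dump the resulting partition table. On the real disks the equivalent
# would be:  echo 'type=fd' | sudo sfdisk /dev/sdb
sfdisk --dump demo.img

rm demo.img
```

This is handy when you need to partition many disks identically, and the same dump/restore idea is used later in this How To to clone a partition table onto a replacement drive.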

MDADM

The mdadm utility is used to administer Linux MD (multi-disk) arrays: it can create, manage, and monitor software RAID and multipath I/O devices.

View or examine the two devices with mdadm:

$ sudo mdadm --examine /dev/sdb /dev/sdc
/dev/sdb:
   MBR Magic : aa55
Partition[0] :      2095104 sectors at         2048 (type fd)
/dev/sdc:
   MBR Magic : aa55
Partition[0] :      2095104 sectors at         2048 (type fd)

You can see that both partitions are type fd (Linux raid autodetect). At this stage, there’s still no RAID set up on /dev/sdb1 and /dev/sdc1.

You can see that /dev/sdb1 and /dev/sdc1 don’t have RAID set up yet with this command:

$ sudo mdadm --examine /dev/sdb1 /dev/sdc1
mdadm: No md superblock detected on /dev/sdb1.
mdadm: No md superblock detected on /dev/sdc1.

Notice that there are no superblocks detected on those devices.

Create RAID Logical Device

Now create a RAID 1 logical device named /dev/md0 using the two devices /dev/sdb1 and /dev/sdc1:

$ sudo mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sdb1 /dev/sdc1

The --level=mirror flag creates this device as a RAID 1 device.

$ sudo mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

Notice the warning that this device is not suitable as a boot device, which is fine since we do not intend to boot from it.

Now check the status of the MD device like this:

$ cat /proc/mdstat

You should see something like this:

$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid1 sdc1[1] sdb1[0]
      1046528 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>

This shows we now have a new, active MD device designated ‘md0’. It is configured as RAID 1 and is comprised of /dev/sdb1 and /dev/sdc1.

You can get additional information with the following command:

$ sudo mdadm --detail /dev/md0

You should see details that look somewhat like this:

$ sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Tue Mar 14 20:58:52 2023
        Raid Level : raid1
        Array Size : 1046528 (1022.00 MiB 1071.64 MB)
     Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Tue Mar 14 20:58:57 2023
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : firefly:0  (local to host firefly)
              UUID : b501eeef:a0a505d6:16516baa:ea80a5b8
            Events : 17

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1

This shows the details of the RAID device.

You can get even more details by drilling down to the two RAID devices like this:

$ sudo mdadm --examine /dev/sdb1 /dev/sdc1

You should see something like this:

$ sudo mdadm --examine /dev/sdb1 /dev/sdc1
/dev/sdb1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : b501eeef:a0a505d6:16516baa:ea80a5b8
           Name : firefly:0  (local to host firefly)
  Creation Time : Tue Mar 14 20:58:52 2023
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 2093056 sectors (1022.00 MiB 1071.64 MB)
     Array Size : 1046528 KiB (1022.00 MiB 1071.64 MB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=0 sectors
          State : clean
    Device UUID : 08c16292:aa06d46a:a0646f22:7bb0bd42

    Update Time : Tue Mar 14 20:58:57 2023
  Bad Block Log : 512 entries available at offset 16 sectors
       Checksum : 9cea23ea - correct
         Events : 17


   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : b501eeef:a0a505d6:16516baa:ea80a5b8
           Name : firefly:0  (local to host firefly)
  Creation Time : Tue Mar 14 20:58:52 2023
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 2093056 sectors (1022.00 MiB 1071.64 MB)
     Array Size : 1046528 KiB (1022.00 MiB 1071.64 MB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=0 sectors
          State : clean
    Device UUID : 40d5d968:1b4aefaa:1d2a61a5:57e8d9a1

    Update Time : Tue Mar 14 20:58:57 2023
  Bad Block Log : 512 entries available at offset 16 sectors
       Checksum : 958a78ee - correct
         Events : 17


   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)

This shows even more details about the two devices that are configured as part of md0.

Make the configs persistent

If you were to reboot the system now, the mdadm configuration would not persist and you would lose your md0 device.

To save the configs and make them persistent do the following:

$ sudo mdadm --detail --scan --verbose | sudo tee -a /etc/mdadm/mdadm.conf

This appends a couple of lines to the /etc/mdadm/mdadm.conf file.
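The appended entry should look something like this; the UUID is the array UUID reported by mdadm --detail earlier, and yours will differ:

```
ARRAY /dev/md0 level=raid1 num-devices=2 metadata=1.2 name=firefly:0 UUID=b501eeef:a0a505d6:16516baaa:ea80a5b8
   devices=/dev/sdb1,/dev/sdc1
```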

To make it persistent across reboots do:

$ sudo update-initramfs -u

Reboot the server to verify that the configs persist.

Once the server has booted, check to see what your MD device is:

$ cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
md0 : active (auto-read-only) raid1 sdc1[1] sdb1[0]
      1046528 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>

Your new MD device md0 still exists as a RAID 1 device. (The ‘auto-read-only’ flag is normal after assembly; it clears on the first write to the array.)

Configure Logical Volume Management (LVM)

Now you will need to configure LVM on the RAID Device.

Create a Physical Volume on the MD Device:

$ sudo pvcreate /dev/md0
  Physical volume "/dev/md0" successfully created.

Confirm that you have a new Physical Volume:

$ sudo pvdisplay /dev/md0
  "/dev/md0" is a new physical volume of "1022.00 MiB"
  --- NEW Physical volume ---
  PV Name               /dev/md0
  VG Name               
  PV Size               1022.00 MiB
  Allocatable           NO
  PE Size               0   
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               7zbtfn-EmYy-L8SI-VGsR-JXLU-yRge-3iddqw

The next step is to create a Volume Group on the Physical Volume. I am going to name the Volume Group ‘vg_raid‘:

$ sudo vgcreate vg_raid /dev/md0
  Volume group "vg_raid" successfully created

Confirm the Volume Group was created as expected:

$ sudo vgdisplay vg_raid
  --- Volume group ---
  VG Name               vg_raid
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1020.00 MiB
  PE Size               4.00 MiB
  Total PE              255
  Alloc PE / Size       0 / 0   
  Free  PE / Size       255 / 1020.00 MiB
  VG UUID               qW9wd6-jGAr-LiV4-3xG0-mm8J-YP02-rEQGMs

Now create the Logical Volume on the Volume Group. I will name the Logical Volume ‘lv_raid‘ and use the maximum space available on the Volume Group:

$ sudo lvcreate -n lv_raid -l 100%FREE vg_raid
  Logical volume "lv_raid" created.

Confirm creation of the Logical Volume with the lvdisplay command like so:

$ sudo lvdisplay vg_raid/lv_raid
  --- Logical volume ---
  LV Path                /dev/vg_raid/lv_raid
  LV Name                lv_raid
  VG Name                vg_raid
  LV UUID                FZZEIT-stdx-RuAR-vA4Y-MSHY-Itia-NTzG3b
  LV Write Access        read/write
  LV Creation host, time firefly, 2023-03-14 23:45:14 +0000
  LV Status              available
  # open                 0
  LV Size                1020.00 MiB
  Current LE             255
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

Create Filesystem

Now you can create a file system on the Logical Volume. Since the file system type I am using on the other drive is ext4 I will use the same:

$ sudo mkfs.ext4 -F /dev/vg_raid/lv_raid
mke2fs 1.46.5 (30-Dec-2021)
Creating filesystem with 261120 4k blocks and 65280 inodes
Filesystem UUID: 40420f4a-f470-47d5-a53d-922829cde8f3
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

Mount the Logical Volume

Now that you have the file system created you are ready to mount it somewhere.

Create a mount point for mounting the Logical Volume. I’m going to create ‘/mnt/Raid‘:

$ sudo mkdir /mnt/Raid

Now you can mount the Logical Volume on the new mount point:

$ sudo mount /dev/vg_raid/lv_raid /mnt/Raid

You can now optionally set the ownership of the mount point:

$ sudo chown mac:mac /mnt/Raid/

Check to see if your Logical Volume is mounted:

$ mount | grep raid

You should see something similar to the following:

$ mount | grep raid
/dev/mapper/vg_raid-lv_raid on /mnt/Raid type ext4 (rw,relatime)

You can see that the /dev/mapper device is /dev/mapper/vg_raid-lv_raid.

Automatically Mount when booting

To automatically mount the Raid Device when booting edit the /etc/fstab file and add a section that looks like this:

# Mount Raid
/dev/mapper/vg_raid-lv_raid   /mnt/Raid   ext4   defaults   0   0
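Alternatively, you can mount by filesystem UUID instead of the device-mapper path. The UUID below is the one mkfs.ext4 printed above (yours will differ); you can confirm it with sudo blkid /dev/mapper/vg_raid-lv_raid:

```
# Mount Raid by filesystem UUID
UUID=40420f4a-f470-47d5-a53d-922829cde8f3   /mnt/Raid   ext4   defaults   0   0
```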

Now un-mount the file system:

$ sudo umount /dev/mapper/vg_raid-lv_raid

Then re-mount with the entry in /etc/fstab:

$ sudo mount -a

Then verify it was remounted:

$ mount | grep raid

You should now be able to reboot the system and it will automatically mount this Raid Device.

Test

Test Auto Mount and Read & Write to filesystem

Now test your setup: write a test file, reboot, confirm the RAID is working, and write to the file again.

Create a test file in /mnt/Raid:

$ vi /mnt/Raid/testfile.txt

Write some information to the file and save the file.

Reboot your system:

$ sudo shutdown -r now

Once you have logged back into your system, confirm that the md0 device is active:

$ sudo mdadm --detail /dev/md0

Examine the two partitions that comprise the mdadm device

$ sudo mdadm --examine /dev/sdb1 /dev/sdc1

Confirm the system automatically mounted the Raid device:

$ mount | grep raid

View the test file you created earlier:

$ ls -l /mnt/Raid/

Then confirm its content:

$ cat /mnt/Raid/testfile.txt

Then edit the file, add additional text, save it, and confirm the change.

$ vi /mnt/Raid/testfile.txt

Test Drive Failure

Now test your setup by simulating a failure in one of the drives.

Remember that we created a file system on a Logical Volume on top of an mdadm raid device comprised of two partitions, each partition on its own physical device (/dev/sdb & /dev/sdc). Then we mounted that file system on /mnt/Raid. We can easily see this graphically with the lsblk command like this:

$ lsblk /dev/sdb /dev/sdc

You should see something like the following:

$ lsblk /dev/sdb /dev/sdc
NAME                  MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINTS
sdb                     8:16   0    1G  0 disk  
└─sdb1                  8:17   0 1023M  0 part  
  └─md0                 9:0    0 1022M  0 raid1 
    └─vg_raid-lv_raid 253:1    0 1020M  0 lvm   /mnt/Raid
sdc                     8:32   0    1G  0 disk  
└─sdc1                  8:33   0 1023M  0 part  
  └─md0                 9:0    0 1022M  0 raid1 
    └─vg_raid-lv_raid 253:1    0 1020M  0 lvm   /mnt/Raid

This shows the mount point /mnt/Raid that was created on the Logical Volume which lives on the md0 raid device comprised of two partitions that each live on their own disk.

Let’s assume that at some point in the future disk sdb starts to degrade, will soon fail, and you need to replace the disk.

Write all cache to disk:

$ sudo sync

Un-mount the file system:

$ sudo umount /dev/vg_raid/lv_raid

Set the /dev/sdb1 partition of md0 as failed:

$ sudo mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md0

Verify partition /dev/sdb1 as faulty:

$ sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Tue Mar 14 20:58:52 2023
        Raid Level : raid1
        Array Size : 1046528 (1022.00 MiB 1071.64 MB)
     Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Wed Mar 15 20:57:51 2023
             State : clean, degraded 
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 1
     Spare Devices : 0

Consistency Policy : resync

              Name : firefly:0  (local to host firefly)
              UUID : b501eeef:a0a505d6:16516baa:ea80a5b8
            Events : 27

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       33        1      active sync   /dev/sdc1

       0       8       17        -      faulty   /dev/sdb1

You can also verify with this:

$ cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid1 sdb1[0](F) sdc1[1]
      1046528 blocks super 1.2 [2/1] [_U]
      
unused devices: <none>

The (F) next to sdb1 indicates it has been marked as failed.
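A quick way to check for a degraded array from a script is to look for an underscore in the [UU] status field of /proc/mdstat. This helper is a hypothetical sketch that reads mdstat-format text on stdin:

```shell
# mdstat_health: read /proc/mdstat-format text on stdin and print
# "degraded" if any array's status field (e.g. [UU], [_U]) contains
# an "_" (a missing or failed member), otherwise "healthy".
mdstat_health() {
  if grep -q '\[[U_]*_[U_]*\]'; then
    echo degraded
  else
    echo healthy
  fi
}

# Typical use on a live system:
#   mdstat_health < /proc/mdstat
```

A cron job or monitoring agent can call this and alert when the output is not "healthy".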

Now you can remove the disk with mdadm like this:

$ sudo mdadm --manage /dev/md0 --remove /dev/sdb1
mdadm: hot removed /dev/sdb1 from /dev/md0

Confirm with the cat command like before:

$ cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid1 sdc1[1]
      1046528 blocks super 1.2 [2/1] [_U]
      
unused devices: <none>

The only partition listed now is sdc1.

You can also confirm with the lsblk command like this:

$ lsblk /dev/sdb /dev/sdc
NAME                  MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINTS
sdb                     8:16   0    1G  0 disk  
└─sdb1                  8:17   0 1023M  0 part  
sdc                     8:32   0    1G  0 disk  
└─sdc1                  8:33   0 1023M  0 part  
  └─md0                 9:0    0 1022M  0 raid1 
    └─vg_raid-lv_raid 253:1    0 1020M  0 lvm

You can now shutdown the server and replace that hard drive.

Replace Hard Drive

For a Physical Machine it is easy to find the correct hard drive to remove by referencing the serial number you got from the lshw command you ran at the beginning of this How To. Replace the failed drive and power on the server.

Replace Virtual Hard Drive

For a Virtual Machine running on VirtualBox use these instructions to simulate a new hard drive.

Open the VirtualBox Virtual Media Manager: File > Tools > Virtual Media Manager.

Create a new Virtual Hard Disk with the same size and type as the other two Virtual Disks. Take note of what you name it and where you store it.

Go to the Storage Settings of your Virtual Machine: Machine > Settings > Storage. Here you will need to remove the “bad” drive from the Virtual Machine and add the new drive. The drives were added as LsiLogic SCSI drives on SCSI Port 0 and Port 1.

In our case Port 0 should be equivalent to /dev/sdb and Port 1 should be equivalent to /dev/sdc.

Since we failed disk /dev/sdb you will need to remove the disk associated with Port 0.

Now add the new disk you created in the previous step and associate it as Port 0.

Save the config and power on the Server.
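For reference, the same disk swap can be done from the host’s command line with VBoxManage. This is a sketch only: the VM name, controller name, and file path below are examples and must match your own setup.

```shell
# Create a new 1 GB virtual disk (--size is in MB).
VBoxManage createmedium disk --filename ~/VirtualBox\ VMs/ubuntu/newdisk.vdi --size 1024

# Attach it to the VM's SCSI controller on port 0. The --storagectl
# name must match the controller name shown in the VM's Storage settings.
VBoxManage storageattach "ubuntu" --storagectl "SCSI" \
  --port 0 --device 0 --type hdd --medium ~/VirtualBox\ VMs/ubuntu/newdisk.vdi
```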

Restore the Partition

Before proceeding you should confirm that you have a new device named /dev/sdb. Do that like this:

$ sudo fdisk -l /dev/sdb

You should see something like the following:

$ sudo fdisk -l /dev/sdb
Disk /dev/sdb: 1 GiB, 1073741824 bytes, 2097152 sectors
Disk model: HARDDISK        
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Notice that it is not partitioned.

Duplicate the partition from /dev/sdc to /dev/sdb:

$ sudo sfdisk -d /dev/sdc | sudo sfdisk /dev/sdb

Now that the disk is partitioned you can add it to the Raid Device:

$ sudo mdadm --manage /dev/md0 --add /dev/sdb1
mdadm: added /dev/sdb1

Now verify the status of the Raid Device:

$ sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Tue Mar 14 20:58:52 2023
        Raid Level : raid1
        Array Size : 1046528 (1022.00 MiB 1071.64 MB)
     Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Thu Mar 16 01:30:32 2023
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : firefly:0  (local to host firefly)
              UUID : b501eeef:a0a505d6:16516baa:ea80a5b8
            Events : 49

    Number   Major   Minor   RaidDevice State
       2       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1

For larger drives it may take some time to actually sync the new device.
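While the array rebuilds, /proc/mdstat shows a recovery line with a percentage. You can watch it live with `watch cat /proc/mdstat`, or pull the figure out in a script. This helper is a sketch that assumes the usual mdstat recovery line format:

```shell
# resync_pct: read mdstat-format text on stdin and print the first
# percentage found (the recovery/resync progress), or nothing if no
# rebuild is in progress.
resync_pct() {
  grep -o '[0-9][0-9.]*%' | head -n 1
}

# Typical use on a live system:
#   resync_pct < /proc/mdstat
```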

You can also get a summary of the device like this:

$ cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid1 sdb1[2] sdc1[1]
      1046528 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>

Confirm your test file is still intact:

$ cat /mnt/Raid/testfile.txt

Wrap Up

Wait a minute. What about the whole part where you have to create the Logical Volume, create the file system, and mount the device, you say? This is a RAID device. While a member was missing, the array was still functioning on a single disk, and the entry in /etc/fstab mounted it automatically at boot. When you added the new disk, mdadm automatically rebuilt it as a mirror of the existing disk, so nothing above the array needed to be recreated.

Additional Info

Additional commands that could be important to this How To:

sudo mdadm --stop /dev/md0
sudo mdadm --remove /dev/md0
sudo mdadm --zero-superblock /dev/sdb1 /dev/sdc1

Some valuable links:


Removal of mdadm RAID Devices – How to do it quickly?

How to Set Up Software RAID 1 on an Existing Linux Distribution

Mdadm – How can i destroy or delete an array : Memory, Storage, Backup and Filesystems

Install VirtualBox 7.0 on Linux Mint 21.x

This is what I did to install VirtualBox 7.0 on my new Linux Mint 21.1 workstation.

See the VirtualBox Wiki for the deets on VirtualBox 7.0

  1. Ensure your system has been updated:
    $ sudo apt update && sudo apt upgrade -y
  2. Download the VirtualBox GPG Keys:
    $ curl https://www.virtualbox.org/download/oracle_vbox_2016.asc | gpg --dearmor > oracle_vbox_2016.gpg
    $ curl https://www.virtualbox.org/download/oracle_vbox.asc | gpg --dearmor > oracle_vbox.gpg
  3. Import the VirtualBox GPG Keys:
    $ sudo install -o root -g root -m 644 oracle_vbox_2016.gpg /etc/apt/trusted.gpg.d/
    $ sudo install -o root -g root -m 644 oracle_vbox.gpg /etc/apt/trusted.gpg.d/
  4. There does not appear to be an official repository for Linux Mint, but Linux Mint is derived from Ubuntu 22.04 which is code named ‘Jammy’. Add the Jammy VirtualBox Repository to the system:
    $ echo "deb [arch=amd64] http://download.virtualbox.org/virtualbox/debian \
    jammy contrib" | sudo tee /etc/apt/sources.list.d/virtualbox.list
  5. Update the Repositories:
    $ sudo apt update
  6. Install Linux Headers:
    $ sudo apt install linux-headers-$(uname -r) dkms
  7. Install VirtualBox:
    $ sudo apt install virtualbox-7.0
  8. Download the VirtualBox Extension Pack:
    $ cd ~/Downloads
    $ VER=$(curl -s https://download.virtualbox.org/virtualbox/LATEST.TXT)
    $ wget https://download.virtualbox.org/virtualbox/$VER/Oracle_VM_VirtualBox_Extension_Pack-$VER.vbox-extpack
  9. Install the Extension Pack:
    $ sudo VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-*.vbox-extpack
  10. You can now launch VirtualBox from the Desktop menu.

Install Homebrew on Mac OS

The version of Mac OS I’m working with here is 12.6.6 Monterey

Open a terminal and paste in the following:

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

You will be told what Homebrew is going to do and to confirm or abort. Go ahead and Confirm.

You will see a lot of stuff scroll past but in the end you should have Homebrew installed.

Confirm with this:

% brew help

You should see the help screen for Homebrew.

Now you can install packages. I’m in the need of telnet to test network connections so I install like this:

% brew install telnet


SSH Keys

Scenario: You just installed your Linux Server and now you want to be able to SSH to that server using your SSH Keys so you don’t have to authenticate via password each time.

Assumption: You have already created your public and private SSH Keys and now you want to copy the Public Key to the Server so you can authenticate with SSH Keys.

The utility we are going to use is ssh-copy-id. This will copy your current user’s public SSH Key to the remote host.

  1. Copy the public SSH Key to the remote host like this:
    $ ssh-copy-id -i ~/.ssh/id_rsa.pub [remote-host]
  2. You will be prompted to enter the password for the new host. The command copies over your public SSH key from ~/.ssh/id_rsa.pub.
  3. You should now be able to ssh to the remote host using keys as your authentication method instead of password.
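If ssh-copy-id isn’t available, you can do the same thing by hand: append the key to ~/.ssh/authorized_keys on the remote host and set the permissions sshd expects. Here is a minimal sketch of that remote-side step, written as a helper that works against an arbitrary directory so it can be rehearsed without touching a real account:

```shell
# install_key PUBKEY HOMEDIR: append a public key line to
# HOMEDIR/.ssh/authorized_keys with the permissions sshd requires
# (700 on the .ssh directory, 600 on authorized_keys).
install_key() {
  mkdir -p "$2/.ssh"
  chmod 700 "$2/.ssh"
  printf '%s\n' "$1" >> "$2/.ssh/authorized_keys"
  chmod 600 "$2/.ssh/authorized_keys"
}

# On a real host you would run the same commands against $HOME.
```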

Install and Set Up Ubuntu 22.04 Server

This page will walk through what I did to install Ubuntu 22.04 Server for use in my home network.

I’m installing it as a Virtual Machine using VirtualBox 7.0, but these instructions should also be valid if you are installing on a Physical Machine, which I will do once I have confirmed this works the way I expect.

Virtual Machine Specs

The Virtual Machine Specs I set up for this are:

Memory: 4096 MB
Processor: 2 CPUs
Storage: SATA Port 0, 25 GB
Network: Bridged Adapter

In VirtualBox 7.0 you can do what is called an Unattended Installation. I believe this is only available for certain Operating Systems but since I have not explored that option fully I am skipping it for now.

Follow these instructions to install Ubuntu 22.04 Server:

  1. Power on the Machine.
  2. When Presented with the GNU GRUB screen, select ‘Try or Install Ubuntu Server‘.
  3. Ubuntu starts the boot process.
  4. When Presented with a Language Selection Menu, select your language.
  5. Your keyboard configuration may already be selected for you. If not select your keyboard and then select ‘Done‘.
  6. Choose the Type of install. For this document I am going to choose ‘Ubuntu Server‘ and also select ‘Search for third-party drivers’.
  7. On the Network connections screen my network settings have been filled in automatically by DHCP. This is satisfactory for me so I choose ‘Done‘.
  8. On the Configure proxy screen you can choose to configure a proxy. This can be common in a corporate environment but in my case as my home network I don’t need to do this. Click ‘Done‘ when satisfied.
  9. On the Configure Ubuntu archive mirror screen you can safely click ‘Done‘ unless you know otherwise.
  10. On the Guided storage configuration screen I chose Use an entire disk and Set up this disk as an LVM Group. I did NOT encrypt. Select ‘Done‘.
  11. On the Storage configuration screen I accepted the summary and selected ‘Done‘.
  12. On the Confirm destructive action dialogue box I selected ‘Continue‘ since this is a new machine and I am confident I am not overwriting anything.
  13. On the Profile setup screen I typed my name, chose a server name, chose a user name and password then selected ‘Done‘.
  14. On the Upgrade to Ubuntu Pro screen select ‘Skip for now‘ then select ‘Continue‘.
  15. On the SSH Setup screen select ‘Install OpenSSH server’, then select ‘Done‘.
  16. On the Third-party drivers screen I see that “No applicable third-party drivers are available locally or online.” so I select ‘Continue‘.
  17. On the Featured Server Snaps screen I’m leaving it all blank. This document is about installing Ubuntu and not about snaps so I may do another document on that later. Select ‘Done‘.
  18. You will see a message that it is installing the system and then security updates. When it is ready you will be able to select ‘Reboot Now‘.
  19. Once you have rebooted you should be given a login prompt. You can now login with the user you created.
  20. When you login you will get some statistics about the system, one of which is the IP address. You can use that IP address to ssh to the host now and do some of the other things outlined in this document.

Additional Resources

Here are additional resources that will be useful in configuring your server.

Additional Useful Tools

Install additional packages:

$ sudo apt install members
$ sudo apt install net-tools

Additional Pages to review

How to Install and Configure an NFS Server on Ubuntu 22.04

How to Install and Configure an NFS Server on Ubuntu 22.04

How to Install a Desktop (GUI) on an Ubuntu Server

https://phoenixnap.com/kb/how-to-install-a-gui-on-ubuntu

How To Sudo without password

Scenario: You just installed your Linux Server, you are the only person using the server, and you want to sudo without having to type your password all the time. This How To will show you one way of accomplishing that task.

This How To assumes you are a member of the sudo group.

  1. Check to see if you are a member of the sudo group:
    $ id
    You should see a list of all the groups you are a member of.
  2. Edit the /etc/sudoers file:
    $ sudo visudo
    This will open the /etc/sudoers file with the default editor.
  3. There will be a line that looks like this:
    %sudo ALL=(ALL:ALL) ALL
  4. Comment out that line and replace it with a line that looks like this:
    %sudo ALL=(ALL) NOPASSWD: ALL
  5. Save the file.

You should now be able to sudo without being prompted for your password every time.
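Rather than changing the %sudo line for everyone, a common alternative is a per-user drop-in file under /etc/sudoers.d. This is a sketch; the user name is an example:

```
# /etc/sudoers.d/mac  (create it with: sudo visudo -f /etc/sudoers.d/mac)
mac ALL=(ALL) NOPASSWD: ALL
```

Using visudo to create the file syntax-checks it before saving, which protects you from locking yourself out of sudo with a typo.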

Linux Mint 21.x

These are my notes on configuring Linux Mint 21.x.

If you find this and think it is useful, leave a comment and say what you like or don’t like. Keep in mind these are my own notes and are not intended to be a HowTo for the general public.

This installation was done on a Dell Optiplex 7050. I’m also installing on Oracle VirtualBox, so I will add some additional steps for that, which will be noted as extra steps for Virtual Box.

Disable Secure Boot

I configured the Dell BIOS to have Secure Boot Disabled. It is possible to install this and have Secure Boot Enabled but for my purposes this is simply a hassle that I don’t need and the benefits are negligible for a home computer.

Install Linux Mint 21.x.

As of this writing it is Mint 21.1. I may update these instructions as newer versions come out. Without going into lots of detail on how to install Linux Mint, which has been covered in many other HowTos, I am just focusing on what I do to configure it to my liking. I am installing on a fresh new disk. I did install multimedia codecs. If you have turned off Secure Boot as mentioned earlier you will not have any additional prompts in this area.

I did select Advanced Features in the Installation Type window and selected to use LVM with the new installation. I chose to erase the disk because this is a new disk and a fresh install. I initially chose to encrypt my home directory, but I am currently testing without encryption.

The installation is pretty straightforward and not complicated.

Up and Running

Virtual Box Guest Additions

For a VirtualBox virtual machine you will need to install Guest Additions:

  1. Click Devices
  2. Insert Guest Additions CD image
  3. Click ‘Run’
  4. Type your password

This will install guest additions and allow you to resize your screen on the fly.

First Steps

When you first run Mint you will get a Welcome Screen. On the left click First Steps.

Panel Layout. I like Traditional Panel Layout.

Launch the Update Manager and update everything. You may need to reboot at this point.

Launch Driver Manager and see if you need any drivers. I did not need any.

I’ll talk about System Snapshots a little later.

I will address Firewall a little later as well.

The other items on First Steps are pretty much self explanatory.

Firmware

I got a message when I did the updates that the firmware was outdated. I was able to resolve the issue by doing the following:

$ sudo apt install fwupd
$ fwupdmgr get-updates
$ fwupdmgr update

Then follow the prompts to update. The system will reboot and do the updates then reboot again.

Synergy

I’m putting Synergy first. For me it makes it easier to set up my new machine alongside my old one and use the single keyboard and mouse. That way I don’t have to switch back and forth on the keyboard.

Linux Mint 21 is based on Ubuntu 22.04 LTS. See: https://en.wikipedia.org/wiki/Linux_Mint

Go to https://symless.com/account and sign in. Go to the download page and get the package for Synergy 1. Synergy 2 is no longer supported and is not backwards compatible. Synergy 3 is in beta if interested. Download the Ubuntu 22 package and save it to ~/Downloads.

Install it on both the Server and Client computer. Make sure the same version is on both computers:

$ cd ~/Downloads
$ sudo apt install ./synergy_1.14.6-snapshot.88fdd263_ubuntu22_amd64.deb

Now from the desktop menu select Synergy and run it.

  • You will be prompted to name the computer. If your computer already has a name then it will suggest the name for you. Click ‘Apply’.
  • You will be prompted to enter your serial key. This can be found on the Account page on the Synergy web site.
  • You will be prompted to select to either ‘Use this computer’s keyboard and mouse…’ or ‘Use another computer’s keyboard and mouse…’. In this case I am using another computer’s keyboard and mouse. Select the appropriate response.
  • Type in the IP address of the Server. Click ‘Connect’
  • You will get a ‘Security Question’ about the Server’s fingerprint. Read that and click ‘Yes’.
  • On the Server side you need to click the ‘Configure Server’ button to configure the layout.
  • If you run into trouble you should go into preferences and un-check ‘Enable TLS encryption’ on both Server and Client and get it working without TLS. Then once it is working switch to TLS.
  • From the new computer’s startup menu find ‘Startup Application’ and add Synergy to startup list. I’ve added a startup delay of about 30 seconds.
  • Once you have everything working correctly you should go to Preferences in both Server and Client and click both ‘Hide on startup’ and ‘Minimize to system tray’. Now you can minimize and not have it open in your task bar.

Sudoers

Edit the /etc/sudoers file so you don’t have to put your password in each time:

$ sudo visudo

There will be a line that looks like this:

%sudo ALL=(ALL:ALL) ALL

Comment out that line and make it look like this:

%sudo ALL=(ALL) NOPASSWD: ALL

Now when you use sudo you will not have to enter your password.

Install OpenSSH Server

Install SSH Server so you can ssh to the host:

$ sudo apt install openssh-server -y

Test ssh to the new host. You may during this process encounter an error regarding an “Offending ECDSA key in ~/.ssh/known_hosts”. This is easily resolved by deleting the referenced line in ~/.ssh/known_hosts.
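Rather than hunting down the line by hand, ssh-keygen can remove it for you; [newhostname] below stands in for the host named in the warning:

```shell
# ssh-keygen -R removes every known_hosts entry for the named host and
# saves a backup copy as known_hosts.old.
ssh-keygen -R [newhostname]
```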

I’ve also experienced an issue where ssh to this new host by name does not work. SSH via IP address does work. DNS resolution is correct. I even have the host in /etc/hosts. No dice.

I was finally able to resolve the issue by putting an entry in the ~/.ssh/config.d/LocalHosts.conf file on the host I ssh from. The entry in this file looks like this:

Host pop
Hostname 192.168.20.34
ForwardX11 yes
ForwardX11Trusted yes

This seems to have solved the problem. I suspect I have some other conflicting entry in my ssh config files that is preventing this, but I can’t find it.
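One tool that helps when hunting for conflicting entries is ssh -G, which prints the fully resolved configuration ssh would use for a host, after merging every config file, without actually connecting:

```shell
# ssh -G merges all client config files and prints the effective
# settings for the named host; no connection is made.
ssh -G pop | grep -iE '^(hostname|forwardx11) '
```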

SSH Keys:

Now that you can ssh to your new host you will want to be able to ssh using your ssh key instead of password. From the remote host do this:

$ ssh-copy-id -i ~/.ssh/id_rsa.pub [newhostname]

You will be prompted to enter the password for the New Host. It will copy over your public ssh key from ~/.ssh/id_rsa.pub. This assumes your public ssh key is indeed ~/.ssh/id_rsa.pub.

You should be able to ssh to the new host now without entering your password.

(Optional) Copy the entire ~/.ssh directory contents from your old host to the new host, so the keys, known_hosts, and authorized_keys files from your user on the old host carry over to the new one.

From the old host:

$ cd ~/.ssh
$ scp -r * [new-host-name]:~/.ssh
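One gotcha after copying keys this way: ssh silently ignores private keys whose permissions are too open. A quick sketch to tighten things up on the new host, assuming the default ~/.ssh layout:

```shell
# ssh refuses private keys that other users can read, so lock down the
# directory and the copied files.
chmod 700 ~/.ssh
chmod 600 ~/.ssh/id_* 2>/dev/null
chmod 644 ~/.ssh/*.pub ~/.ssh/known_hosts 2>/dev/null
```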

Hosts file:

Copy the Home Network section of your /etc/hosts file from the old host to the /etc/hosts file on the new host.

Dropbox

Install Dropbox and python3-gpg packages

$ sudo apt install dropbox python3-gpg

Then go to start menu and find Dropbox and run it.

You will get a message that says in order to use Dropbox you must download the proprietary daemon. Click OK

A Web Page will pop up where you enter your credentials. Do so. You can now open the DropBox client in the toolbar.

Install KeepassXC

Keepass XC is the greatest Password Safe in my humble opinion.

Install it:

$ sudo apt install keepassxc -y

Install Chrome

You’ll need Chrome as well.

Go to https://www.google.com/chrome/

Click the Download Chrome button. Mine automatically downloaded into ~/Downloads. The 64 bit version was automatically selected.

Install it like this:

$ cd ~/Downloads
$ sudo apt install ./google-chrome-stable_current_amd64.deb

This will automatically install a repository as well for future updates.

Install Signal

Go to https://signal.org/en/download/
Click on Download for Linux and follow the instructions that pop up.

After you install Signal edit the startup line in /usr/share/applications/signal-desktop.desktop to look like this:

Exec=/opt/Signal/signal-desktop --use-tray-icon --no-sandbox %U

Additional Software

There are other software packages I need. I’ll do them one at a time because I don’t want to confuse error messages between one package and another:

$ sudo apt install kwrite -y
$ sudo apt install kate -y
$ sudo apt install terminator -y
$ sudo apt install sshuttle -y
$ sudo apt install vim -y
$ sudo apt install sshpass -y
$ sudo apt install nfs-common -y
$ sudo apt install gparted -y
$ sudo apt install imagemagick -y
$ sudo apt install whois -y
$ sudo apt install lsscsi -y

Mount NFS Share

Create a mount point:

$ cd ~
$ mkdir -p mnt/[nfs-server-host-name]

Edit /etc/fstab and add these lines:

# External Mounts
[nfs-server-host-name]:[path-to-nfs-export] /home/[your-user]/mnt/[nfs-server-host-name] nfs rw,soft,noauto 0 0

Edit /etc/hosts and add the IP address of the NFS server.

Then mount the NFS share:

$ sudo mount [nfs-server-host-name]:[path-to-nfs-export]

You will need to modify the firewall rule on the NFS server to allow connections from your new host before this will work.
https://squoggle.wordpress.com/2020/05/04/iptables/

Mount External Hard Drive

See what device your External USB device shows up as:

$ lsscsi
[0:0:0:0] disk ATA Samsung SSD 860 4B6Q /dev/sda
[1:0:0:0] cd/dvd HL-DT-ST DVD+-RW GU90N A1C2 /dev/sr0
[4:0:0:0] disk WD Elements 25A1 1018 /dev/sdb

In my case it shows up as /dev/sdb.

Edit your /etc/fstab file and make an entry like this:

# Western Digital Elements Backup Drive
/dev/sdb1    /home/mac/mnt/WD    ntfs    rw,relatime,user_id=0,group_id=0,allow_other   0 0

Create a mount point for the External Hard Drive

$ mkdir -p ~/mnt/WD

Then mount

$ sudo mount -a
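To confirm the drive actually mounted, findmnt shows the device, filesystem type, and options in effect for the mount point:

```shell
# findmnt prints the device, filesystem type, and mount options for a
# mount point, and exits non-zero if nothing is mounted there.
findmnt ~/mnt/WD
```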

Install Slack:

Go to https://slack.com/downloads/linux
Download the .deb 64 bit package into your ~/Downloads directory.
Then install it:

$ cd ~/Downloads
$ sudo apt install ./slack-desktop-4.29.149-amd64.deb

Crossover

Get the most recent version of Crossover here:
https://www.codeweavers.com/crossover

Get the free trial and download to your machine.

Then install like this:

$ sudo apt install ./crossover_[version-number].deb

Before you attempt to run any bottle you will need to install this library:

$ sudo apt-get install liblcms2-2:i386

This will install a bunch of other dependencies as well.

To export a bottle from one machine to another, in this case Quicken, which is the only reason for running Crossover, do this:

  1. Open Crossover
  2. Right Click on the Quicken Bottle.
  3. Choose ‘Export Quicken 2017 to Archive’
  4. Choose a location to save it. It is a good idea to time stamp the file to not overwrite a previous working bottle export.
  5. On the new machine go to Menu > Bottle > Import Bottle Archive
  6. Browse to where you stored the archive, click it and click ‘Restore’.
  7. I get a message that CrossOver needs to install several Linux packages in order to run Windows applications. Click Yes. This will install a butt load of libraries and dependencies.
  8. You may actually think it is stuck but when it seems to stop doing something see if the ‘Continue’ button is active and if so, click it.
  9. The process will sit there for a bit acting like it is stuck. Just be patient.
  10. Finally your bottle should be imported.
  11. Make symlinks from your data files into your home directory, because Crossover has issues finding files that are buried deep in the filesystem.
  12. Crossover only needs your email address and login password to register. There is no serial number.

Surprisingly, this was the first time importing a bottle worked flawlessly. This is a new version on a new machine, so maybe they worked the kinks out of it.

VueScan

Get the latest version here:

https://www.hamrick.com/alternate-versions.html

Profile

Modify your profile.

Edit ~/.bashrc and change

alias ll='ls -alF'

to

alias ll='ls -lF'

Set your $PATH to include ~/bin

# Set your path to include $HOME/bin
PATH="$HOME/bin:$PATH"

Save the file and then source it like this:

$ source ~/.bashrc

Additional Packages

Here’s a way you can see what packages you have on your old machine and compare to what you have on your new machine.

On the old machine do:

$ sudo apt list --installed | cut -f1 -d/ | sort > installed.[old-hostname]

Then on the new machine do:

$ sudo apt list --installed | cut -f1 -d/ | sort > installed.[new-hostname]

Then SCP the installed.[new-hostname] file to the old host and then compare them like this:

$ diff installed.gob installed.pop | grep '<'

This will give you a list of packages that are installed on the old host but not on the new host. It turns out I had quite a few. Go through the list and see what you need on the new host.
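Since the lists are already sorted, comm is an alternative to diff that prints the old-host-only packages directly, with no diff markers to strip (gob and pop are the example hostnames from above):

```shell
# comm compares two sorted files; -23 suppresses lines unique to the
# second file and lines common to both, leaving only the packages that
# exist on the old host (gob) but not the new one (pop).
comm -23 installed.gob installed.pop
```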

The majority of the packages you find will probably be dependencies for some other package you installed. If you don’t know what a package is for you can easily check information about it with:

$ apt show [package-name]

The majority of the packages I found this way are libraries that are dependencies for other packages I have installed over time.

I found a few packages that I think are useful and should probably be installed:

alien
gimp
gparted
git
mlocate
nmap
traceroute

This is a short list of many.

Other Must See Pages

At this point you should be up and running and ready to work. However there are a lot more things that I typically use on a day to day basis when using Linux Mint.

This list is not an extensive list but may be of help:

Install VirtualBox 7.0 on Linux Mint 21.x

Key Store Explorer

Installing ZenMap in UBUNTU 22.04

How to Install Zenmap on Ubuntu 22.04

How to install Proton VPN on Linux Mint

How to use the Proton VPN Linux app

Install JetBrains Toolbox App Then use the Toolbox to install PyCharm and DataGrip


Online Certificate Status Protocol (OCSP)

Online Certificate Status Protocol (OCSP) is an alternative method to Certificate Revocation Lists (CRLs) used to check the validity of digital certificates in a public key infrastructure (PKI).

When a user encounters a digital certificate, their software can use OCSP to send a request to the certificate authority (CA) to check the current status of the certificate. The CA responds to the request with one of three responses: “good”, “revoked”, or “unknown”.

If the response is “good”, the user’s software can proceed with the transaction or access to the resource protected by the certificate. If the response is “revoked”, the software rejects the certificate as invalid. If the response is “unknown”, the software may require additional steps to verify the validity of the certificate.

Unlike CRLs, which can become large and unwieldy as the number of revoked certificates increases, OCSP allows for more efficient and timely checking of individual certificates. However, it requires a constant connection to the CA to receive real-time status updates and can be subject to performance and privacy concerns.

The Good about OCSP

  • Real-time validation: OCSP provides real-time validation of certificates, so users can immediately determine whether a certificate is valid or not.
  • Smaller and more efficient: OCSP responses are typically smaller and more efficient than certificate revocation lists (CRLs), especially for large PKIs with many revoked certificates.
  • Reduced latency: OCSP can reduce latency by eliminating the need for users to download and parse large CRL files.
  • More privacy-friendly: OCSP can be more privacy-friendly than CRLs, as it doesn’t require users to download a complete list of revoked certificates and associated information.

The Bad about OCSP

  • Increased network traffic: OCSP requires users to contact the certificate authority (CA) server each time a certificate is validated, which can increase network traffic and cause performance issues.
  • Single point of failure: OCSP relies on a single CA server for validation, so if the server goes down or experiences issues, users may be unable to validate certificates.
  • Reduced reliability: OCSP may be less reliable than CRLs in certain situations, such as when there are issues with the CA’s OCSP server or network connectivity.
  • Potential privacy concerns: While OCSP can be more privacy-friendly than CRLs, it still allows the CA to track which certificates are being validated and when, which may be a concern for some users.

Check the OCSP status of a Certificate

You can check an Online Certificate Status Protocol (OCSP) response with OpenSSL using the openssl ocsp command. Here is an example command:

openssl ocsp -issuer issuer_cert.pem -cert certificate.pem -url http://ocsp.server.com -text

This command checks the status of the certificate in certificate.pem by sending an OCSP request to the server at http://ocsp.server.com. The issuer_cert.pem file is the certificate of the issuer that signed the certificate.pem file. The -text option displays the response in human-readable text.

After running the command, you will receive an OCSP response that includes the status of the certificate. If the status is “good”, the certificate is valid. If the status is “revoked”, the certificate has been revoked by the issuer. If the status is “unknown”, the server was unable to provide a definitive response for the certificate.

Get the Certificate from a Site:

Lets use google.com as an example.

Get the Certificate for google.com and save it to a file named certificate.pem:

openssl s_client -connect google.com:443 -showcerts </dev/null | sed -n '/Certificate/,/-----END CERTIFICATE-----/p' | tail -n +3 > certificate.pem

Get the Issuing Cert from a Site:

Get the issuing certificate for google.com and save it to a file named issuer.pem:

openssl s_client -connect google.com:443 -showcerts </dev/null | sed -n '/1 s:/,/-----END CERTIFICATE-----/p' | tail -n +3 > issuer.pem

Extract the OCSP URL from the Certificate:

Use OpenSSL to get the OCSP URL from the Certificate and save it to a variable named ocspurl:

ocspurl=$(openssl x509 -in certificate.pem -noout -text | grep "OCSP" | cut -f2,3 -d:)
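On reasonably recent OpenSSL versions there is also a dedicated flag for this, which is less fragile than grepping the text dump; a sketch worth trying first:

```shell
# -ocsp_uri prints the OCSP responder address from the certificate's
# Authority Information Access extension (prints nothing if absent).
ocspurl=$(openssl x509 -in certificate.pem -noout -ocsp_uri)
echo "$ocspurl"
```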

Test the OCSP Status of the Certificate:

Check the OCSP status of the certificate using the openssl ocsp command like this:

openssl ocsp -issuer issuer.pem -cert certificate.pem -url $ocspurl -text

You should get a response that looks something like this:

OCSP Request Data:
    Version: 1 (0x0)
    Requestor List:
        Certificate ID:
          Hash Algorithm: sha1
          Issuer Name Hash: 12D78B402C356206FA827F8ED8922411B4ACF504
          Issuer Key Hash: A5CE37EAEBB0750E946788B445FAD9241087961F
          Serial Number: 0CD04791FC985ABB27E20A42A232FDF5
    Request Extensions:
        OCSP Nonce: 
            0410CD24FED402FF2B1D2331485C81AD1C21
OCSP Response Data:
    OCSP Response Status: successful (0x0)
    Response Type: Basic OCSP Response
    Version: 1 (0x0)
    Responder Id: A5CE37EAEBB0750E946788B445FAD9241087961F
    Produced At: Apr 26 00:54:27 2023 GMT
    Responses:
    Certificate ID:
      Hash Algorithm: sha1
      Issuer Name Hash: 12D78B402C356206FA827F8ED8922411B4ACF504
      Issuer Key Hash: A5CE37EAEBB0750E946788B445FAD9241087961F
      Serial Number: 0CD04791FC985ABB27E20A42A232FDF5
    Cert Status: good
    This Update: Apr 26 00:39:01 2023 GMT
    Next Update: May  2 23:54:01 2023 GMT

    Signature Algorithm: ecdsa-with-SHA256
         30:45:02:20:45:c2:eb:e2:54:23:2a:c5:49:47:c2:f0:0b:cf:
         8d:06:6d:17:62:26:2e:4a:ba:8e:cd:61:bf:dd:af:e8:ea:cb:
         02:21:00:94:bd:5c:33:e7:ac:20:50:d4:15:45:9e:d8:8d:75:
         1a:fb:c5:95:5f:11:c7:b2:88:47:0a:5b:56:d0:3c:89:b5
WARNING: no nonce in response
Response verify OK
certificate.pem: good
	This Update: Apr 26 00:39:01 2023 GMT
	Next Update: May  2 23:54:01 2023 GMT

OpenSSL OCSP Commands Documentation

Online Certificate Status Protocol command

https://www.openssl.org/docs/man3.0/man1/openssl-ocsp.html

Certificate Revocation List (CRL)

Certificate Revocation Lists (CRLs) are used in public key infrastructure (PKI) to identify digital certificates that have been revoked by the certificate authority (CA) before their expiration date.

When a CA revokes a digital certificate, it adds the certificate’s serial number to the CRL. The CRL is then distributed to users who rely on the PKI, such as web browsers and other software that verify digital certificates.

When a user encounters a digital certificate that has been revoked, their software checks the CRL to confirm that the certificate is no longer valid. If the certificate’s serial number is listed on the CRL, the software will reject the certificate and prevent the user from accessing the website or resource protected by the certificate.
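OpenSSL can perform this CRL check locally. A hedged sketch, assuming you have the issuing CA certificate and its current CRL saved to files (the file names here are examples):

```shell
# -crl_check tells openssl verify to consult the CRL while validating;
# the CRL must be signed by the CA given in -CAfile.
openssl verify -crl_check -CAfile ca.pem -CRLfile crl.pem certificate.pem
```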

CRL Expiration

The client typically gets a new Certificate Revocation List (CRL) from the Certificate Authority (CA) when the existing CRL expires or when there have been changes to the status of certificates that have been revoked.

When a CA revokes a digital certificate, it adds the certificate’s serial number to the CRL. The CRL contains a list of all the revoked certificates, along with their revocation status and the reason for revocation.

The CRL has an expiration date and time, after which it is no longer considered valid. The expiration date is typically set by the CA when the CRL is issued, and it is usually a few days to a few weeks after the issue date. When the CRL is about to expire, the client will check with the CA to obtain a new CRL that is valid for the next period.

In addition to the expiration date, the client may also obtain a new CRL if there are changes to the revocation status of certificates that have been previously listed in the CRL. This can happen if a certificate that was previously revoked is now reinstated, or if a certificate that was previously valid is now revoked.

The client can obtain a new CRL from the CA via various means, such as through online updates or downloads. Some PKIs also use alternative methods of certificate revocation, such as Online Certificate Status Protocol (OCSP), which can provide real-time updates on the status of certificates.

The Good about CRL

  • Offline validation: CRLs can be downloaded and stored offline, allowing users to validate certificates even when they are not connected to the network.
  • No single point of failure: Unlike OCSP, CRLs don’t rely on a single server for validation, so they are less susceptible to single points of failure.
  • Better reliability: CRLs may be more reliable than OCSP in certain situations, such as when the CA’s OCSP server or network connectivity is experiencing issues.
  • Can cover multiple certificates: A single CRL can cover multiple certificates, reducing the amount of data that needs to be downloaded and parsed.

The Bad about CRL

  • Larger size: CRLs can become large and unwieldy as the number of revoked certificates increases, leading to longer download times and increased storage requirements.
  • Increased latency: CRLs can introduce latency into the certificate validation process, as users must download and parse the entire CRL before they can validate a certificate.
  • May be outdated: CRLs are typically updated on a periodic basis, so there is a risk that a certificate may have been revoked between updates and the user may not be aware of it.
  • May present a privacy risk: CRLs can potentially expose information about revoked certificates, which could be used by attackers to gather information about a PKI.

Overall, CRLs can be an effective means of validating certificates in a PKI, especially in situations where offline validation is important or when the number of revoked certificates is relatively small. However, they also have some drawbacks that should be considered, such as larger size, increased latency, and potential privacy risks.

Delta CRL

A Delta Certificate Revocation List (CRL) is a type of CRL that contains only the revoked certificates that have been added or changed since the previous CRL was issued. The Delta CRL is meant to be used in conjunction with the base CRL, which contains the complete list of revoked certificates.

The Delta CRL is a more efficient way of distributing certificate revocation information, as it contains only the changes to the previous CRL, rather than the entire list of revoked certificates. This can significantly reduce the size of the CRL and the time it takes to download and process it.

To use a Delta CRL, the client first downloads the base CRL, which contains the complete list of revoked certificates. The client then downloads the Delta CRL, which contains only the changes since the previous CRL. The client then merges the Delta CRL with the base CRL to obtain a complete and up-to-date list of revoked certificates.

The use of Delta CRLs can help to improve the efficiency of certificate revocation in large PKIs, especially when the number of revoked certificates is high and changes occur frequently. However, the use of Delta CRLs also requires additional management and coordination between the CA and the client, as both parties must ensure that the Delta CRL is properly applied and merged with the base CRL.

Troubleshooting CRL

Sometimes you may need to troubleshoot certificate issues by examining a CRL (Certificate Revocation List)

Download a CRL

These instructions show how you can easily download a CRL from a website. I’ll use https://duckduckgo.com/ in this example.

  1. Open Google Chrome. Navigate to https://duckduckgo.com/. Notice the padlock in the address bar.
  2. Right click on the padlock in the address bar. Click Connection is secure to see the connection details.
  3. Click Certificate is valid to open the certificate details box. Click the Details tab.
  4. In the Certificate Fields box, scroll down and click on CRL Distribution Points. In the Field Value box you will see any URLs associated with the CRL for the Certificate Authority or the Signing Certificate.
  5. Copy and paste the URL into a new window of the browser. You will be prompted to save the file. In my case I downloaded a file named DigiCertTLSRSASHA2562020CA1-4.crl.

Parse the CRL

  1. Open a terminal in the directory where you saved the CRL.
  2. Check to see if the CRL is in DER format or PEM format. Most CRLs are in DER format. A simple head command on the CRL file will show whether it is a DER (binary) file or a PEM file. If it is binary you will see gibberish. If it is PEM formatted you will see “-----BEGIN X509 CRL-----”.
  3. Parse the CRL. If the CRL is in DER format use this syntax:
    openssl crl -inform DER -text -noout -in [crl-file] | less
    If the CRL is in PEM format use this syntax:
    openssl crl -inform PEM -text -noout -in [crl-file] | less
  4. You will see a list of all the revoked certificates that were issued by the Issuing Certificate.

OpenSSL CRL Commands Documentation

The OpenSSL CRL commands official documentation:

https://www.openssl.org/docs/man3.0/man1/openssl-crl.html

TLS 1.2 vs. TLS 1.3: Exploring the Key Differences and Advancements in Security

Introduction

Transport Layer Security (TLS) is a widely-used cryptographic protocol that provides secure communications over a computer network, such as the Internet. TLS ensures that the data transmitted between a client and a server is encrypted and protected from eavesdropping and tampering. In this blog post, we will discuss the key differences between TLS 1.2 and TLS 1.3, the latest version of the protocol, and explore how TLS 1.3 offers improved security, performance, and privacy compared to its predecessor.

Faster and More Efficient Handshake Process

One of the most significant improvements in TLS 1.3 is the streamlined and efficient handshake process. In most cases, TLS 1.3 reduces the number of round trips between the client and server to just one, speeding up the connection establishment. This improvement is particularly beneficial for latency-sensitive applications like web browsing, providing a more responsive user experience.

Modern and Secure Cryptographic Algorithms

TLS 1.3 supports only modern and secure cryptographic algorithms, removing outdated and vulnerable ciphers that were still allowed in TLS 1.2. By eliminating weak ciphers and focusing on strong encryption techniques, TLS 1.3 offers better resistance to attacks and cryptographic weaknesses. For example, TLS 1.3 no longer supports the RSA key exchange, which is vulnerable to several attacks.
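You can see the contrast locally: on OpenSSL 1.1.1 or later, listing the supported suites shows the small, fixed set of TLS 1.3 ciphers next to the much longer TLS 1.2 list:

```shell
# 'openssl ciphers -v' lists supported cipher suites with the protocol
# version in the second column; this filters out just the TLS 1.3 set.
openssl ciphers -v | awk '$2 == "TLSv1.3"'
```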

Mandatory Forward Secrecy

Forward secrecy is a security feature that ensures that even if a server’s private key is compromised, past communication sessions cannot be decrypted. While forward secrecy was optional in TLS 1.2, it is mandatory in TLS 1.3. This is achieved by using ephemeral (short-lived) keys for each session, which are discarded after use, further enhancing the security of the protocol.

Simplified Protocol Design

TLS 1.3 boasts a simpler and cleaner design compared to TLS 1.2, as it has removed many features and options that were either outdated or considered insecure. This streamlined design makes the protocol easier to implement, understand, and analyze, reducing the likelihood of implementation errors and security vulnerabilities.

Zero Round-Trip Time (0-RTT) Resumption

A new feature introduced in TLS 1.3 is the 0-RTT resumption, which allows clients to send encrypted data to a server during the initial handshake, without waiting for the handshake to complete. This can significantly improve performance in certain scenarios, such as when a client is reconnecting to a previously-visited server. However, this feature can also introduce some security risks, and its use should be carefully evaluated.

Conclusion

TLS 1.3 offers several advantages over TLS 1.2, including improved security, performance, and privacy. Its adoption has been growing steadily, and it is now the recommended version for securing communications over the Internet. However, it is important to note that while TLS 1.3 is superior, TLS 1.2 is still considered secure when properly configured with modern ciphers and settings. By understanding the key differences between these two versions, organizations can make informed decisions about their security infrastructure and ensure the highest level of protection for their users.