Squoggle

Mac's tech blog


Linux Mint 22 – Custom UEFI LVM Installation

Linux Mint’s Cinnamon desktop is one of the most productive and visually clean environments around. However, its installer has limitations, such as minimal support for Logical Volume Management (LVM) across multiple devices. This guide provides a detailed walkthrough for configuring Linux Mint 22 with UEFI and LVM across multiple storage devices, allowing for advanced customization.

This guide documents the steps I followed to install Linux Mint 22 using Logical Volumes across multiple physical storage devices. By sharing this process, I aim to help others replicate this setup, whether on virtual machines or physical hardware.

My Setup

For this installation I am doing the configuration on a VirtualBox virtual machine to test the process, with the intent of duplicating it on a physical host. The salient details of the hardware are as follows:

Hardware                    Virtual Machine    Physical Machine
Memory                      32 GB              64 GB
Processors                  4                  8
Disk via SATA Controller    100 GB             1 TB
Disk via NVMe Controller    10 GB              2 TB

Partitioning

One of the reasons I’m doing this custom installation is that the automated installation does not allow me to customize the partitioning. In particular, I want to control the amount of swap space so that I can configure hibernation on this host, which requires more swap space defined than I have configured in memory. The outline below describes the physical devices and partitions I want to create:
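
As a quick sanity check before settling on a swap size, you can confirm how much memory the machine actually has from the live session. Two simple checks (the numbers will differ per machine):

grep MemTotal /proc/meminfo
free -h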

Partition Setup Overview

/dev/sda:

  1. /dev/sda1: EFI partition (1 GB, FAT32).
    • The EFI partition only needs to hold the bootloader and essential EFI files. A 1 GB size provides ample space for future updates or additional boot entries, without wasting disk space.
  2. /dev/sda2: 4 GB ext4 filesystem mounted at /boot.
    • The /boot partition houses the Linux kernel, initramfs, and other boot-related files. A 4 GB allocation ensures sufficient space for kernel updates, multiple kernels, and recovery options, especially useful if the system uses custom kernels.
  3. /dev/sda3: Physical Volume (PV) for LVM for swap space (48 GB) and for root file system (remaining space).
    • Allocating the remaining space to an LVM PV provides flexibility. It allows dynamic resizing of logical volumes for swap and the root filesystem as needs evolve. By defining a large swap space (48 GB), the system supports hibernation, which requires swap space to be at least equal to the amount of installed memory. The rest is allocated to the root filesystem to hold the operating system and application files.

/dev/nvme0n1:

  1. Physical Volume (PV) for LVM for /home (entire device).
    • Using the entire NVMe device for the /home logical volume ensures fast access to user data and provides a clear separation from the root filesystem. This approach enhances performance and simplifies backups or future migrations of user data.

Steps to Configure /dev/sda

1. Open GParted

  1. Boot into the Linux Mint live session.
  2. Launch GParted from the menu.
  3. Select /dev/sda from the dropdown at the top-right.

2. Create GPT Partition Table on /dev/sda

  1. Go to Device > Create Partition Table.
  2. Select GPT as the new partition table type and click Apply.

3. Create EFI Partition

  1. Right-click on the unallocated space and select New.
  2. Configure:
    • Size: 1 GB.
    • File System: fat32.
    • Label: EFI.
  3. Click Add.
  4. Apply all changes:
    • Click the checkmark (green tick). Click Apply.
  5. Set flags:
    • Right-click the partition > Manage Flags.
    • Enable boot and esp flags.

4. Create Boot Partition

  1. Right-click on the unallocated space and select New.
  2. Configure:
    • Size: 4 GB.
    • File System: ext4.
    • Label: Boot.
  3. Click Add.

5. Create LVM Physical Volume

This LVM Physical Volume will contain both the SWAP space and the Root (“/”) file system.

  1. LVM/PV Partition:
    • Right-click the remaining unallocated space and select New.
    • Configure:
      • Size: Remaining Space.
      • File System: lvm2 pv
      • Label: SDA PV.
    • Click Add.
  2. Apply all changes:
    • Click the checkmark (green tick). Click Apply.

6. Check your work

You can check your work with the following terminal commands:

sudo pvdisplay
sudo fdisk -l /dev/sda

You should see indications of what you have created with GParted.

Steps to Configure /dev/nvme0n1

Since the entire NVMe device is going to be configured as an LVM Physical Volume, nothing needs to be done in GParted; everything happens in the terminal below.

Setup LVM Using the Terminal

1. Prepare Physical Volumes

Run this command in the terminal:

sudo pvcreate /dev/nvme0n1 # Physical Volume for /home

Verify the Physical Volumes have been created:

sudo pvdisplay

2. Create Volume Groups

sudo vgcreate vg_sda /dev/sda3
sudo vgcreate vg_nvme0 /dev/nvme0n1

Verify the Volume Groups:

sudo vgdisplay

3. Create Logical Volumes

  1. Swap:
    sudo lvcreate -L 48G -n lv_swap vg_sda
  2. Root:
    sudo lvcreate -l 100%FREE -n lv_root vg_sda
  3. Home:
    sudo lvcreate -l 100%FREE -n lv_home vg_nvme0

Verify the Logical Volumes:

sudo lvdisplay

4. Format Logical Volumes

I will be formatting the lv_root and lv_home Logical Volumes during the installation in the next section, so it is not required to do it here.

Enable the swap Logical Volume:

sudo mkswap /dev/vg_sda/lv_swap
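
If you want to confirm the swap volume is usable before running the installer, you can activate it in the live session, check that the kernel sees it, and deactivate it again. An optional check, using the volume created above:

sudo swapon /dev/vg_sda/lv_swap
swapon --show
sudo swapoff /dev/vg_sda/lv_swap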

Verify everything with this:

lsblk -f

You should see a representation of how you have partitioned your devices.

5. Mount Points in Installer

I’m using ext4 for compatibility and performance.

  1. During installation, choose Something Else.
  2. Assign:
    • /dev/mapper/vg_nvme0-lv_home: /home and format to ext4
    • /dev/mapper/vg_sda-lv_root: / and format to ext4
    • /dev/mapper/vg_sda-lv_swap: swap space
    • /dev/sda1: EFI System Partition (automatically formatted as FAT32)
    • /dev/sda2: /boot and format to ext4
  3. Set Device for boot loader installation to /dev/sda. /dev/sda houses the EFI System Partition, which keeps boot loader management simple and reliable on a UEFI-based system.

6. Install Linux Mint

  1. You should now be ready to install. Click the Install Now button.
  2. You will see a summary of what the installer is going to do. If you are satisfied, click the Continue button.
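
Since the larger swap volume exists to support hibernation, note that the installer does not configure resume-from-swap for you. On Ubuntu-based systems this is typically done by pointing both the initramfs and the kernel command line at the swap device. A sketch, assuming the volume names used in this guide:

# Tell initramfs-tools which device holds the hibernation image
echo 'RESUME=/dev/mapper/vg_sda-lv_swap' | sudo tee /etc/initramfs-tools/conf.d/resume
sudo update-initramfs -u

# Add resume= to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, for example:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash resume=/dev/mapper/vg_sda-lv_swap"
sudo update-grub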

Tailscale

Build your own VPN.

These instructions cover how to install Tailscale on both Linux Mint 21.x and 22.x.

Linux Mint 21.x

Add Tailscale’s GPG key:

$ curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/jammy.noarmor.gpg | sudo tee /usr/share/keyrings/tailscale-archive-keyring.gpg >/dev/null

Add the tailscale repository:

$ curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/jammy.tailscale-keyring.list | sudo tee /etc/apt/sources.list.d/tailscale.list

Install Tailscale:

$ sudo apt-get update && sudo apt-get install tailscale

Start Tailscale:

$ sudo tailscale up

Linux Mint 22.x

Add Tailscale’s GPG key:

$ curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/noble.noarmor.gpg | sudo tee /usr/share/keyrings/tailscale-archive-keyring.gpg >/dev/null

Add the tailscale repository:

$ curl -fsSL https://pkgs.tailscale.com/stable/ubuntu/noble.tailscale-keyring.list | sudo tee /etc/apt/sources.list.d/tailscale.list

Install Tailscale:

$ sudo apt-get update && sudo apt-get install tailscale

Start Tailscale:

$ sudo tailscale up
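
Once you have authenticated, you can confirm the node joined your tailnet and see the address it was assigned:

$ tailscale status
$ tailscale ip -4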

Logical Volume on NVMe disk

This blog post describes how to configure a new NVMe SSD with Logical Volumes, format it, and then mount it for use on your Linux system. The OS I’m doing this on is Linux Mint 22.0, but the steps are very similar on other Linux distros.

List Disks

The first step is to get information about the disks on your system. Do that with the lshw command:

$ sudo lshw -class disk
  *-disk                    
       description: ATA Disk
       product: Samsung SSD 870
       physical id: 0
       bus info: scsi@0:0.0.0
       logical name: /dev/sda
       version: 3B6Q
       serial: S75BNL0X510488E
       size: 931GiB (1TB)
       capabilities: gpt-1.00 partitioned partitioned:gpt
       configuration: ansiversion=5 guid=0a659757-d6ef-4549-a6fe-ad2ca7f79fb2 logicalsectorsize=512 sectorsize=512
  *-cdrom
       description: DVD writer
       product: DVD+-RW DU-8A5LH
       vendor: PLDS
       physical id: 1
       bus info: scsi@1:0.0.0
       logical name: /dev/cdrom
       logical name: /dev/sr0
       version: 6D1M
       capabilities: removable audio cd-r cd-rw dvd dvd-r
       configuration: ansiversion=5 status=nodisc
  *-namespace:0
       description: NVMe disk
       physical id: 0
       logical name: hwmon1
  *-namespace:1
       description: NVMe disk
       physical id: 2
       logical name: /dev/ng0n1
  *-namespace:2
       description: NVMe disk
       physical id: 1
       bus info: nvme@0:1
       logical name: /dev/nvme0n1
       size: 1863GiB (2TB)
       configuration: logicalsectorsize=512 sectorsize=512 wwid=eui.0025384541a0abf0

The device I’m interested in here is logical name: /dev/nvme0n1, which is a 2 TB device.

Physical Volumes

Check status of Physical Volumes on the system:

$ sudo pvdisplay

I only see the boot device. This means no Physical Volume has been created yet on the SSD:

$ sudo pvdisplay
--- Physical volume ---
PV Name /dev/sda2
VG Name vgmint
PV Size 931.01 GiB / not usable 4.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 238338
Free PE 0
Allocated PE 238338
PV UUID kGgo6r-HbVW-R0H8-Z4Ll-fGTO-fMej-O0iREI

Create the new Physical Volume on /dev/nvme0n1:

$ sudo pvcreate /dev/nvme0n1
Physical volume "/dev/nvme0n1" successfully created.

Check and confirm:

$ sudo pvdisplay
--- Physical volume ---
PV Name /dev/sda2
VG Name vgmint
PV Size 931.01 GiB / not usable 4.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 238338
Free PE 0
Allocated PE 238338
PV UUID kGgo6r-HbVW-R0H8-Z4Ll-fGTO-fMej-O0iREI

"/dev/nvme0n1" is a new physical volume of "<1.82 TiB"
--- NEW Physical volume ---
PV Name /dev/nvme0n1
VG Name
PV Size <1.82 TiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID lda15M-nH4L-CEok-QDat-2A1O-dPBI-VeTeEK

We now have the new Physical Volume on /dev/nvme0n1.

Volume Group

Check the status of existing Volume Groups:

$ sudo vgdisplay
--- Volume group ---
VG Name vgmint
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size <931.01 GiB
PE Size 4.00 MiB
Total PE 238338
Alloc PE / Size 238338 / <931.01 GiB
Free PE / Size 0 / 0
VG UUID 88rFXN-yf9R-epV4-msvR-5cRr-zl0v-qxFUEd

You see that there already exists a Volume Group named vgmint where the root filesystem is installed.

Create the new Volume Group named vgnvme on the newly created /dev/nvme0n1 physical volume like this:

$ sudo vgcreate vgnvme /dev/nvme0n1
Volume group "vgnvme" successfully created

Then confirm it was created correctly:

$ sudo vgdisplay vgnvme
--- Volume group ---
VG Name vgnvme
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size <1.82 TiB
PE Size 4.00 MiB
Total PE 476932
Alloc PE / Size 0 / 0
Free PE / Size 476932 / <1.82 TiB
VG UUID pH9fRB-QXRz-IGkL-t57V-xVXm-NP15-rf9vlA

Logical Volume

Now review the existing Logical Volumes on the system with the lvdisplay command:

$ sudo lvdisplay
--- Logical volume ---
LV Path /dev/vgmint/root
LV Name root
VG Name vgmint
LV UUID Oq70Uf-b9zI-zQE1-134v-Zi0t-M6ee-jGYgD1
LV Write Access read/write
LV Creation host, time mint, 2024-08-27 18:47:19 -0400
LV Status available
# open 1
LV Size <929.10 GiB
Current LE 237849
Segments 1
Allocation inherit
Read ahead sectors auto
currently set to 256
Block device 252:0 

--- Logical volume ---
LV Path /dev/vgmint/swap_1
LV Name swap_1
VG Name vgmint
LV UUID 82mxHk-oqhS-I1DF-HsRo-OaBs-S5la-lG70HE
LV Write Access read/write
LV Creation host, time mint, 2024-08-27 18:47:19 -0400
LV Status available
# open 2
LV Size 1.91 GiB
Current LE 489
Segments 1
Allocation inherit
Read ahead sectors auto
currently set to 256
Block device 252:1

You see that I have an existing Logical Volume named root for the root filesystem and another named swap_1 for the swap space. Both Logical Volumes reside on the vgmint Volume Group.

Now create a new Logical Volume named volnvme on the newly created vgnvme Volume Group. Create it using the maximum space allowed on the Volume Group:

$ sudo lvcreate -n volnvme -l 100%FREE vgnvme
Logical volume "volnvme" created.

Then confirm it was created correctly. Use volume_group/volume_name format when displaying:

$ sudo lvdisplay vgnvme/volnvme
--- Logical volume ---
LV Path /dev/vgnvme/volnvme
LV Name volnvme
VG Name vgnvme
LV UUID v2DC8Q-XQPk-aFUU-X8wR-XWCq-RCYx-i80TFB
LV Write Access read/write
LV Creation host, time Gob, 2024-09-01 16:43:52 -0400
LV Status available
# open 0
LV Size <1.82 TiB
Current LE 476932
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 252:2

This shows the correctly created Logical Volume that is associated with the Volume Group.

Create Filesystem

Before we can use it we need to create a filesystem on the Logical Volume. I’m going to create an ext4 filesystem:

$ sudo mkfs.ext4 /dev/vgnvme/volnvme
mke2fs 1.47.0 (5-Feb-2023)
Discarding device blocks: done
Creating filesystem with 488378368 4k blocks and 122101760 inodes
Filesystem UUID: 74388657-077d-46ca-adb4-44e986ff6c47
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848

Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done

Mount the Logical Volume

Now that you have the file system created you are ready to mount it somewhere. I want to mount it within my own user’s home directory. I need to create a mount point for this:

$ mkdir -p ~/mnt/nvme

Notice I did not use sudo. I want this directory structure and all files owned by my user.

Now test mount the new Logical volume on the location you created for it like this:

$ sudo mount /dev/vgnvme/volnvme /home/mac/mnt/nvme

If everything went as expected you should not get any response to the above command.

Verify that the Logical Volume is mounted:

$ mount | grep /dev/mapper/vgnvme-volnvme
/dev/mapper/vgnvme-volnvme on /home/mac/mnt/nvme type ext4 (rw,relatime)

If you check the user and group ownership of the /home/mac/mnt/nvme directory at this point you will see that it is owned by root, even though we created it without sudo. This is because we had to mount it as root and it took on root ownership. Now that it is mounted you can change this ownership:

$ cd ~/mnt
$ sudo chown -R mac:mac ~/mnt/nvme/

Now un-mount the filesystem:

$ sudo umount /home/mac/mnt/nvme

Automatically mount this filesystem when booting

I want this filesystem to always be automatically mounted when the system boots. This means there needs to be an entry in the /etc/fstab file that directs the system to mount it.

First you will need to see what the mapper for this filesystem is. You can find it by doing a listing in /dev/mapper:

$ ll /dev/mapper

I see the following:

$ ll /dev/mapper
total 0
crw------- 1 root root 10, 236 Aug 30 17:54 control
lrwxrwxrwx 1 root root 7 Aug 30 17:54 vgmint-root -> ../dm-0
lrwxrwxrwx 1 root root 7 Aug 30 17:54 vgmint-swap_1 -> ../dm-1
lrwxrwxrwx 1 root root 7 Sep 1 16:43 vgnvme-volnvme -> ../dm-2

It looks like my mapper device is /dev/mapper/vgnvme-volnvme.

Now, edit the /etc/fstab file as root user and add a section that looks like this:

# Internal NVME Disk
/dev/mapper/vgnvme-volnvme /home/mac/mnt/nvme ext4 errors=remount-ro 0 1

Tell systemd to reload its configuration so it picks up the new fstab entry:

$ sudo systemctl daemon-reload
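
You can also have findmnt (part of util-linux) sanity-check the new fstab entry for typos before you try to mount it:

$ sudo findmnt --verify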

Now tell the system to mount that filesystem:

$ sudo mount /dev/mapper/vgnvme-volnvme

Confirm the file system is mounted:

$ mount | grep /dev/mapper/vgnvme-volnvme

Once you have confirmed this, you can reboot the system and verify that the new filesystem mounts automatically.

Linux Mint 22.x

These are my notes on configuring Linux Mint 22.x.

Linux Mint 22.x is based on Ubuntu 24.04. Make sure to read the Ubuntu release notes.

If you find this and think it is useful, leave a comment and say what you like or don’t like. Keep in mind these are my own notes and are not intended to be a HowTo for the general public.

This installation was done on a Dell OptiPlex 7050. I’m also installing on Oracle VirtualBox, so I will add some additional steps for that, noted as extra steps for VirtualBox.

Disable Secure Boot

I attempted to install with Secure Boot enabled, but it seems that Linux Mint has an issue installing certain drivers with Secure Boot enabled. This is not really something I need, so I am disabling Secure Boot in the BIOS to not be hassled with it.

Install Linux Mint 22.x

As of this writing it is Mint 22.0. I typically avoid a .0 release, but I just got a new computer and this OS was just released, so we’ll see how it goes. I may update these instructions as newer versions come out. Rather than going into detail on how to install Linux Mint, which has been covered in many other HowTos, I am just focusing on what I do to configure it to my liking. I am installing on a fresh new disk. I did install the multimedia codecs. If you turned off Secure Boot as mentioned earlier, you will not have any additional prompts in this area.

I selected Advanced Features in the Installation Type window and chose to use LVM with the new installation. I chose to erase the disk, because this is a new disk and a fresh install, and I chose to encrypt my home directory.

The installation is pretty straightforward and not complicated.

Up and Running

Virtual Box Guest Additions

As I mentioned, I am also installing Linux Mint 22 on a virtual machine, so for a VirtualBox VM you will need to install Guest Additions. Ignore this if you are installing on a physical machine.

  1. Click Devices
  2. Insert Guest Additions CD image
  3. Click ‘Run’
  4. Type your password

This will install Guest Additions and allow you to resize your screen on the fly.

First Steps

When you first launch Linux Mint you will get a Welcome screen. On the left click ‘First Steps’.

Desktop Colors: I kept the default
Update Manager: Launch the Update Manager and update everything.
Driver Manager: When I launch I get a message that no drivers are needed.
System Snapshots: I will address at a later time.
Firewall: Also addressed later.

Firmware

I want to make sure my firmware (BIOS and other device firmware) is up to date. Do that as follows:

$ sudo apt install fwupd
$ fwupdmgr get-updates
$ fwupdmgr update

Then follow the prompts to update. The system will reboot and do the updates then reboot again.

Sudoers

Edit the /etc/sudoers file so you don’t have to put your password in each time:

$ sudo visudo

There will be a line that looks like this:

%sudo ALL=(ALL:ALL) ALL

Comment out that line and make it look like this:

%sudo ALL=(ALL) NOPASSWD: ALL

Now when you use sudo you will not have to enter your password.

Install OpenSSH Server

Install SSH Server so you can ssh to the host:

$ sudo apt install openssh-server -y

Test ssh to the new host. During this process you may encounter an error regarding an “Offending ECDSA key in ~/.ssh/known_hosts”. This is easily resolved by deleting the referenced line in ~/.ssh/known_hosts.

I’ve also experienced an issue where attempting to ssh to this new host by name does not work, while ssh by IP address does. DNS resolution is correct, and I even have the host in /etc/hosts. No dice.

I was finally able to resolve the issue by putting an entry into the ssh config on the host I ssh from, in its ~/.ssh/config.d/LocalHosts.conf file. The entry looks like this:

Host pop
Hostname 192.168.20.34
ForwardX11 yes
ForwardX11Trusted yes

This seems to have solved the problem. I suspect I have some other conflicting entry in my ssh config files that is causing this, but I can’t find it.

SSH Keys:

Now that you can ssh to your new host you will want to be able to ssh using your ssh key instead of password. From the remote host do this:

$ ssh-copy-id -i ~/.ssh/id_rsa.pub [newhostname]

You will be prompted to enter the password for the New Host. It will copy over your public ssh key from ~/.ssh/id_rsa.pub. This assumes your public ssh key is indeed ~/.ssh/id_rsa.pub.

You should be able to ssh to the new host now without entering your password.

(Optional) Now copy the entire ~/.ssh directory contents from your old host to this new host, so the keys, known_hosts, and authorized_keys files from your user on the old host are available on the new one.

From the old host:

$ cd ~/.ssh
$ scp -r * [new-host-name]:~/.ssh
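
SSH is strict about permissions on these files, so after copying it is worth resetting them on the new host:

$ chmod 700 ~/.ssh
$ chmod 600 ~/.ssh/*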

Hosts file:

Copy the Home Network section of your /etc/hosts file from the old host to the /etc/hosts file on the new host.

pCloud

Instead of Dropbox I’ve decided to try pCloud. It is half as much money and much easier to set up. Pretty much all you have to do is create an account on pCloud, then download the software binary and run it. It will install and run every time you boot your computer. Put the binary in /usr/bin, then after you run it, check Startup Applications to make sure it is starting each time and from the correct path. Test by rebooting to see if it starts automatically.

I got the basic account, which gives me 500 GB of storage, more than I need. So far this has worked very well for me and is much less problematic than Dropbox. I’ve not tried it on macOS or Windows yet, but usually Linux is where most of the problems come from.

Install KeepassXC

KeePassXC is the greatest password safe, in my humble opinion.

Install it:

$ sudo apt install keepassxc -y

Install Chrome

You’ll need Chrome as well.

Go to https://www.google.com/chrome/

Click the Download Chrome button. Mine automatically downloaded into ~/Downloads. The 64-bit version was automatically selected.

Install it like this:

$ cd ~/Downloads
$ sudo apt install ./google-chrome-stable_current_amd64.deb

This will automatically install a repository as well for future updates.

Install Signal

Create a temporary directory off of your home directory:

$ mkdir -p ~/tmp
$ cd ~/tmp

Install the Signal official public software signing key:

$ wget -O- https://updates.signal.org/desktop/apt/keys.asc | gpg --dearmor > signal-desktop-keyring.gpg
$ cat signal-desktop-keyring.gpg | sudo tee /usr/share/keyrings/signal-desktop-keyring.gpg > /dev/null

Add the Signal repository to your list of repositories:

$ echo 'deb [arch=amd64 signed-by=/usr/share/keyrings/signal-desktop-keyring.gpg] https://updates.signal.org/desktop/apt xenial main' | sudo tee /etc/apt/sources.list.d/signal-xenial.list

Note that noble is the Ubuntu version that corresponds to Mint 22, but they don’t have a repo for noble so you need to use the xenial repo as shown above.

Update your package database and install Signal:

$ sudo apt update && sudo apt install signal-desktop

Now from the start menu, find Signal and run it. You will be prompted to scan a QR code from your Signal app on your phone. Go to three dots > Settings > Linked Devices and scan the QR Code.

Now edit the startup line in /usr/share/applications/signal-desktop.desktop to look like this:

Exec=/opt/Signal/signal-desktop --use-tray-icon --no-sandbox %U

This will keep Signal alive in your system tray when you close it.

You will also want to add Signal to the automatic startup list. [Super Key] > Startup Applications. Click the ‘+’ sign and ‘Choose Application‘, find Signal, select it, then click ‘Add Application’. You can also edit the config to add a start delay, to give the PC some time to settle before starting it. I delayed it for 120 seconds.

Additional Software

There are other software packages I commonly use that need to be installed:

$ sudo apt install kwrite kate terminator sshuttle vim sshpass nfs-common gparted imagemagick whois lsscsi -y

Mount NFS Share

Mount the NFS share of your old workstation or other server. Create a mount point:

$ sudo mkdir -p /mnt/[nfs-host-name]
$ sudo mkdir -p /mnt/nfs_pop

Make sure the NFS server is in your /etc/hosts file by name.
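
If the mount fails, you can first confirm the server is actually exporting the path; showmount (installed with nfs-common above) queries the server’s export list. Here pop is my NFS server:

$ showmount -e pop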

Test mount the remote NFS Server at your newly created mount point:

$ sudo mount [nfs-server]:/home/mac/mnt/nvme /mnt/[mount-point]
$ sudo mount pop:/home/mac/mnt/nvme /mnt/nfs_pop

Edit the /etc/fstab file to create an entry in it:

# External Mount
pop:/home/mac/mnt/nvme /mnt/nfs_pop nfs rw,soft,noauto 0 0

Now you can simply mount or un-mount the NFS server with the following commands:

$ sudo mount /mnt/nfs_pop
$ sudo umount /mnt/nfs_pop

There may be firewall rules in play that you will have to set or open.

Crossover

Get the most recent version of Crossover here:
https://www.codeweavers.com/crossover

Get the free trial and download to your machine.

Then install like this:

$ sudo apt install ./crossover_[version-number].deb

Before you attempt to run any bottle you will need to install this library:

$ sudo apt-get install liblcms2-2:i386

This will install a bunch of other dependencies as well.

Register the installation of CrossOver before you attempt to install anything.

To export a bottle from one machine to another, in this case Quicken, which is the only reason for running Crossover, do this:

  1. Open Crossover
  2. Right Click on the Quicken_Classic Bottle.
  3. Choose ‘Export Quicken_Classic to Archive’
  4. Choose a location to save it. It is a good idea to time stamp the file to not overwrite a previous working bottle export.
  5. On the new machine go to Menu > Bottle > Import Bottle Archive
  6. Browse to where you stored the archive, click it and click ‘Restore’.
  7. I get a message that CrossOver needs to install several Linux packages in order to run Windows applications. Click Yes. This will install a butt load of libraries and dependencies.
  8. You may actually think it is stuck, but when it seems to stop doing something, see if the ‘Continue’ button is active and if so, click it.
  9. The process will sit there for a bit acting like it is stuck. I let it sit for a few minutes, then came back and x’ed out of where it was, closed Crossover, and started it again. It seems to have installed the bottle.
  10. Finally your bottle should be imported.
  11. Make symlinks to your data files in your home directory, because Crossover has trouble finding files that are buried deep in the tree.
  12. Crossover only needs your email address and login password to register. There is no serial number.

Surprisingly this was the first time importing a bottle worked flawlessly. This is a new version on new machine so maybe they worked the kinks out of it.

VueScan

Get the latest version here:
https://www.hamrick.com/alternate-versions.html

Install it and put your serial number and registration number in.

Profile

Modify your profile.

Edit ~/.bashrc and change

alias ll='ls -alF'

to

alias ll='ls -lF'

Set your $PATH to include ~/bin

# Set your path to include $HOME/bin
PATH="$HOME/bin:$PATH"

Save the file and then source it like this:

$ source ~/.bashrc

Additional Packages

Here’s a way you can see what packages you have on your old machine and compare to what you have on your new machine.

On the old machine do:

$ sudo apt list --installed | cut -f1 -d/ | sort > installed.[old-hostname]

Then on the new machine do:

$ sudo apt list --installed | cut -f1 -d/ | sort > installed.[new-hostname]

Then SCP the installed.[new-hostname] file to the old host and then compare them like this:

$ diff installed.gob installed.pop | grep '<'

This will give you a list of packages that are installed on the old host but not on the new host. It turns out I had quite a few. Go through the list and see what you need on the new machine.

The majority of the packages you find will probably be dependencies for some other package you installed. If you don’t know what a package is for you can easily check information about it with:

$ apt show [package-name]

The majority of the packages I found this way are libraries that are dependencies for other packages I have installed over time.

I found a few packages that I think are useful and should probably be installed:

$ sudo apt install gimp git nmap nmap-common traceroute ethtool ffmpeg guake steam sysstat

Install Spotify

Want to play your Spotify playlists? Install Spotify from the Software Manager. Just search for it and install it.

You should now be able to log into Spotify and play your music.

Mount Additional Drives

See this post: Logical Volume on NVMe disk

Install Virtual Box

See this post: Install VirtualBox 7.0 on Linux Mint 21.x or Linux Mint 22.x

How To Configure Linux Raid

Scenario: You have a Linux Server that has a boot drive and two additional hard drives you want to configure in a RAID array as data drives. This How To will demonstrate how to create a RAID 1 array using two physical drives in a server that is already up and running.

RAID 1 is disk mirroring: I want to mirror one disk to the other. If one physical drive fails I can drop the bad drive from the RAID array, install a new drive, add the new drive to the array, and rebuild it to mirror the existing disk.

RAID can be implemented via hardware or software. With hardware RAID the configuration is done directly on the hardware, typically in the controller itself or in the system’s BIOS. This How To is about software RAID, which is done from the existing running OS.

In this scenario I have an existing Ubuntu 22.04 LTS Virtual Machine server running, and I have added two 1 GB SCSI drives.

List Disks

The first step is to get information about the disks on your system:

$ sudo lshw -class disk
  *-cdrom                   
       description: DVD reader
       product: CD-ROM
       vendor: VBOX
       physical id: 0.0.0
       bus info: scsi@1:0.0.0
       logical name: /dev/cdrom
       logical name: /dev/sr0
       version: 1.0
       capabilities: removable audio dvd
       configuration: ansiversion=5 status=nodisc
  *-disk
       description: ATA Disk
       product: VBOX HARDDISK
       vendor: VirtualBox
       physical id: 0.0.0
       bus info: scsi@2:0.0.0
       logical name: /dev/sda
       version: 1.0
       serial: VB819a3b95-f514a971
       size: 25GiB (26GB)
       capabilities: gpt-1.00 partitioned partitioned:gpt
       configuration: ansiversion=5 guid=e2bb1f14-9526-4dd6-be4c-5aabb4b48723 logicalsectorsize=512 sectorsize=512
  *-disk:0
       description: SCSI Disk
       product: HARDDISK
       vendor: VBOX
       physical id: 0.0.0
       bus info: scsi@3:0.0.0
       logical name: /dev/sdb
       version: 1.0
       size: 1GiB (1073MB)
       configuration: ansiversion=5 logicalsectorsize=512 sectorsize=512
  *-disk:1
       description: SCSI Disk
       product: HARDDISK
       vendor: VBOX
       physical id: 0.1.0
       bus info: scsi@3:0.1.0
       logical name: /dev/sdc
       version: 1.0
       size: 1GiB (1073MB)
       configuration: ansiversion=5 logicalsectorsize=512 sectorsize=512

Notice that Disk 0 and Disk 1, both SCSI disks, are the two disks that will be used for the RAID 1 device. The single ATA disk is where the operating system is installed on this system. Disk 0’s logical device name is /dev/sdb and Disk 1’s logical device name is /dev/sdc.

Partition Disks

The two disks need to have a partition created on them. I do not plan on booting from these two data drives, so no boot loader is needed on them.

Use the fdisk tool to create a new partition on each drive and set its partition type to Linux raid autodetect (a scripted alternative is shown after these steps):

$ sudo fdisk /dev/sdb

Follow these instructions:

  1. Type n to create a new partition.
  2. Type p to select primary partition.
  3. Type 1 to create /dev/sdb1.
  4. Press Enter to choose the default first sector.
  5. Press Enter to choose the default last sector. This partition will span across the entire drive.
  6. Typing p will print information about the newly created partition. By default the partition type is Linux.
  7. We need to change the partition type, so type t.
  8. Enter fd to set partition type to Linux raid autodetect.
  9. Type p again to check the partition type.
  10. Type w to apply the above changes.

Do the same thing for the second drive:

$ sudo fdisk /dev/sdc

Follow the same procedure as above.

You should now have a RAID partition on each disk: /dev/sdb1 and /dev/sdc1.
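
As an aside, the same partitioning can be scripted rather than typed interactively. A sketch using sfdisk, assuming blank disks and accepting the default start and size (one whole-disk partition of type fd):

$ echo 'type=fd' | sudo sfdisk /dev/sdb
$ echo 'type=fd' | sudo sfdisk /dev/sdc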

MDADM

The mdadm tool is used to administer Linux MD (multi-disk) arrays, better known as software RAID. It can be used to create, manage, and monitor MD arrays for software RAID or multipath I/O.

View or examine the two devices with mdadm:

$ sudo mdadm --examine /dev/sdb /dev/sdc
/dev/sdb:
   MBR Magic : aa55
Partition[0] :      2095104 sectors at         2048 (type fd)
/dev/sdc:
   MBR Magic : aa55
Partition[0] :      2095104 sectors at         2048 (type fd)

You can see that both are type fd (Linux raid autodetect). At this stage, there’s no RAID set up on /dev/sdb1 and /dev/sdc1.

Confirm that /dev/sdb1 and /dev/sdc1 don’t have RAID set up yet with this command:

$ sudo mdadm --examine /dev/sdb1 /dev/sdc1
mdadm: No md superblock detected on /dev/sdb1.
mdadm: No md superblock detected on /dev/sdc1.

Notice that there are no superblocks detected on those devices.

Create RAID Logical Device

Now create a RAID 1 logical device named /dev/md0 using the two devices /dev/sdb1 and /dev/sdc1:

$ sudo mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sdb1 /dev/sdc1

The --level=mirror flag creates this device as a RAID 1 device.

$ sudo mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

Notice the warning that this array may not be suitable as a boot device; that’s fine, since we don’t intend to boot from it.

Now check the status of the MD device like this:

$ cat /proc/mdstat

You should see something like this:

$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid1 sdc1[1] sdb1[0]
      1046528 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>

This shows we now have a new, active MD device designated ‘md0’. It is configured as RAID 1 and is comprised of /dev/sdb1 and /dev/sdc1.

You can get additional information with the following command:

$ sudo mdadm --detail /dev/md0

You should see details that look somewhat like this:

$ sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Tue Mar 14 20:58:52 2023
        Raid Level : raid1
        Array Size : 1046528 (1022.00 MiB 1071.64 MB)
     Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Tue Mar 14 20:58:57 2023
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : firefly:0  (local to host firefly)
              UUID : b501eeef:a0a505d6:16516baa:ea80a5b8
            Events : 17

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1

This shows the details of the RAID device.

You can get even more details by drilling down to the two RAID devices like this:

$ sudo mdadm --examine /dev/sdb1 /dev/sdc1

You should see something like this:

$ sudo mdadm --examine /dev/sdb1 /dev/sdc1
/dev/sdb1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : b501eeef:a0a505d6:16516baa:ea80a5b8
           Name : firefly:0  (local to host firefly)
  Creation Time : Tue Mar 14 20:58:52 2023
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 2093056 sectors (1022.00 MiB 1071.64 MB)
     Array Size : 1046528 KiB (1022.00 MiB 1071.64 MB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=0 sectors
          State : clean
    Device UUID : 08c16292:aa06d46a:a0646f22:7bb0bd42

    Update Time : Tue Mar 14 20:58:57 2023
  Bad Block Log : 512 entries available at offset 16 sectors
       Checksum : 9cea23ea - correct
         Events : 17


   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : b501eeef:a0a505d6:16516baa:ea80a5b8
           Name : firefly:0  (local to host firefly)
  Creation Time : Tue Mar 14 20:58:52 2023
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 2093056 sectors (1022.00 MiB 1071.64 MB)
     Array Size : 1046528 KiB (1022.00 MiB 1071.64 MB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=0 sectors
          State : clean
    Device UUID : 40d5d968:1b4aefaa:1d2a61a5:57e8d9a1

    Update Time : Tue Mar 14 20:58:57 2023
  Bad Block Log : 512 entries available at offset 16 sectors
       Checksum : 958a78ee - correct
         Events : 17


   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)

This shows even more details about the two devices that are configured as part of md0.

Make the configs persistent

If you were to reboot the system now, the mdadm configuration would not persist and you would lose your device md0.

To save the configs and make them persistent do the following:

$ sudo mdadm --detail --scan --verbose | sudo tee -a /etc/mdadm/mdadm.conf

This appends a couple of lines to the /etc/mdadm/mdadm.conf file.
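
The appended entry will look something like this (the UUID matches the --detail output above; yours will differ):

ARRAY /dev/md0 level=raid1 num-devices=2 metadata=1.2 name=firefly:0 UUID=b501eeef:a0a505d6:16516baa:ea80a5b8
   devices=/dev/sdb1,/dev/sdc1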

Then update the initramfs so the array is assembled at boot:

$ sudo update-initramfs -u

Reboot the server to ensure the configs are persistent.

Once the server has booted, check to see what your MD device is:

$ cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
md0 : active (auto-read-only) raid1 sdc1[1] sdb1[0]
      1046528 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>

Your new MD device md0 still exists as a RAID 1 device.

Configure Logical Volume Management (LVM)

Now you will need to configure LVM on the RAID Device.

Create a Physical Volume on the MD Device:

$ sudo pvcreate /dev/md0
  Physical volume "/dev/md0" successfully created.

Confirm that you have a new Physical Volume:

$ sudo pvdisplay /dev/md0
  "/dev/md0" is a new physical volume of "1022.00 MiB"
  --- NEW Physical volume ---
  PV Name               /dev/md0
  VG Name               
  PV Size               1022.00 MiB
  Allocatable           NO
  PE Size               0   
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               7zbtfn-EmYy-L8SI-VGsR-JXLU-yRge-3iddqw

The next step is to create a Volume Group on the Physical Volume. I am going to name the Volume Group ‘vg_raid‘:

$ sudo vgcreate vg_raid /dev/md0
  Volume group "vg_raid" successfully created

Confirm the Volume Group was created as expected:

$ sudo vgdisplay vg_raid
  --- Volume group ---
  VG Name               vg_raid
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1020.00 MiB
  PE Size               4.00 MiB
  Total PE              255
  Alloc PE / Size       0 / 0   
  Free  PE / Size       255 / 1020.00 MiB
  VG UUID               qW9wd6-jGAr-LiV4-3xG0-mm8J-YP02-rEQGMs

Now create the Logical Volume on the Volume Group. I will name the Logical Volume ‘lv_raid‘ and use the maximum space available on the Volume Group:

$ sudo lvcreate -n lv_raid -l 100%FREE vg_raid
  Logical volume "lv_raid" created.

Confirm creation of the Logical Volume with the lvdisplay command like so:

$ sudo lvdisplay vg_raid/lv_raid
  --- Logical volume ---
  LV Path                /dev/vg_raid/lv_raid
  LV Name                lv_raid
  VG Name                vg_raid
  LV UUID                FZZEIT-stdx-RuAR-vA4Y-MSHY-Itia-NTzG3b
  LV Write Access        read/write
  LV Creation host, time firefly, 2023-03-14 23:45:14 +0000
  LV Status              available
  # open                 0
  LV Size                1020.00 MiB
  Current LE             255
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

Create Filesystem

Now you can create a file system on the Logical Volume. Since the file system type I am using on the other drive is ext4 I will use the same:

$ sudo mkfs.ext4 -F /dev/vg_raid/lv_raid
mke2fs 1.46.5 (30-Dec-2021)
Creating filesystem with 261120 4k blocks and 65280 inodes
Filesystem UUID: 40420f4a-f470-47d5-a53d-922829cde8f3
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

Mount the Logical Volume

Now that you have the file system created you are ready to mount it somewhere.

Create a mount point for mounting the Logical Volume. I’m going to create ‘/mnt/Raid‘:

$ sudo mkdir /mnt/Raid

Now you can mount the Logical Volume on the new mount point:

$ sudo mount /dev/vg_raid/lv_raid /mnt/Raid

You can now optionally set the ownership of the mount point:

$ sudo chown mac:mac /mnt/Raid/

Check to see if your Logical Volume is mounted:

$ mount | grep raid

You should see something similar to the following:

$ mount | grep raid
/dev/mapper/vg_raid-lv_raid on /mnt/Raid type ext4 (rw,relatime)

You can see that the /dev/mapper device is /dev/mapper/vg_raid-lv_raid.

Automatically Mount when booting

To automatically mount the Raid Device when booting edit the /etc/fstab file and add a section that looks like this:

# Mount Raid
/dev/mapper/vg_raid-lv_raid   /mnt/Raid   ext4   defaults   0   0

Now un-mount the file system:

$ sudo umount /dev/mapper/vg_raid-lv_raid

Then re-mount with the entry in /etc/fstab:

$ sudo mount -a

Then verify it was remounted:

$ mount | grep raid

You should now be able to reboot the system and it will automatically mount this Raid Device.

Test

Test Auto Mount and Read & Write to filesystem

Now test your setup by writing a test file rebooting, confirm RAID is working and write to the file again.

Create a test file in /mnt/Raid:

$ vi /mnt/Raid/testfile.txt

Write some information to the file and save the file.

Reboot your system:

$ sudo shutdown -r now

Once you have logged back into your system, confirm that the md0 device is active:

$ sudo mdadm --detail /dev/md0

Examine the two partitions that comprise the mdadm device

$ sudo mdadm --examine /dev/sdb1 /dev/sdc1

Confirm the system automatically mounted the Raid device:

$ mount | grep raid

View the test file you created earlier:

$ ls -l /mnt/Raid/

Then confirm it’s content:

$ cat /mnt/Raid/testfile.txt

Then edit the file, add additional text to the file, save it, then confirm it.

$ vi /mnt/Raid/testfile.txt

Test Drive Failure

Now test your setup by simulating a failure in one of the drives.

Remember that we created a file system on a Logical Volume, on top of an mdadm RAID device comprised of two partitions, each partition on its own physical device (/dev/sdb & /dev/sdc). Then we mounted that file system on /mnt/Raid. We can easily see this graphically with the lsblk command:

$ lsblk /dev/sdb /dev/sdc

You should see something like the following:

$ lsblk /dev/sdb /dev/sdc
NAME                  MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINTS
sdb                     8:16   0    1G  0 disk  
└─sdb1                  8:17   0 1023M  0 part  
  └─md0                 9:0    0 1022M  0 raid1 
    └─vg_raid-lv_raid 253:1    0 1020M  0 lvm   /mnt/Raid
sdc                     8:32   0    1G  0 disk  
└─sdc1                  8:33   0 1023M  0 part  
  └─md0                 9:0    0 1022M  0 raid1 
    └─vg_raid-lv_raid 253:1    0 1020M  0 lvm   /mnt/Raid

This shows the mount point /mnt/Raid that was created on the Logical Volume which lives on the md0 raid device comprised of two partitions that each live on their own disk.

Let’s assume that at some point in the future disk sdb starts to degrade, will soon fail, and you need to replace it.

Write all cache to disk:

$ sudo sync

Un-mount the file system:

$ sudo umount /dev/vg_raid/lv_raid

Set the /dev/sdb1 partition of md0 as failed:

$ sudo mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md0

Verify that partition /dev/sdb1 is marked faulty:

$ sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Tue Mar 14 20:58:52 2023
        Raid Level : raid1
        Array Size : 1046528 (1022.00 MiB 1071.64 MB)
     Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Wed Mar 15 20:57:51 2023
             State : clean, degraded 
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 1
     Spare Devices : 0

Consistency Policy : resync

              Name : firefly:0  (local to host firefly)
              UUID : b501eeef:a0a505d6:16516baa:ea80a5b8
            Events : 27

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       33        1      active sync   /dev/sdc1

       0       8       17        -      faulty   /dev/sdb1

You can also verify with this:

$ cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid1 sdb1[0](F) sdc1[1]
      1046528 blocks super 1.2 [2/1] [_U]
      
unused devices: <none>

The (F) next to sdb1 indicates it has been marked as failed.

Now you can remove the disk with mdadm like this:

$ sudo mdadm --manage /dev/md0 --remove /dev/sdb1
mdadm: hot removed /dev/sdb1 from /dev/md0

Confirm with the cat command like before:

$ cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid1 sdc1[1]
      1046528 blocks super 1.2 [2/1] [_U]
      
unused devices: <none>

The only partition listed now is sdc1.

You can also confirm with the lsblk command like this:

$ lsblk /dev/sdb /dev/sdc
NAME                  MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINTS
sdb                     8:16   0    1G  0 disk  
└─sdb1                  8:17   0 1023M  0 part  
sdc                     8:32   0    1G  0 disk  
└─sdc1                  8:33   0 1023M  0 part  
  └─md0                 9:0    0 1022M  0 raid1 
    └─vg_raid-lv_raid 253:1    0 1020M  0 lvm

You can now shut down the server and replace that hard drive.

Replace Hard Drive

For a Physical Machine it is easy to find the correct hard drive to remove by referencing the serial number you got from the lshw command you ran at the beginning of this How To. Replace the failed drive and power on the server.

Replace Virtual Hard Drive

For a Virtual Machine running on VirtualBox use these instructions to simulate a new hard drive.

Open the VirtualBox Virtual Media Manager: File > Tools > Virtual Media Manager.

Create a new Virtual Hard Disk with the same size and type as the two previous virtual disks. Take note of what you name it and where you store it.

Go to the Storage Settings of your Virtual Machine: Machine > Settings > Storage. Here you will need to remove the “bad” drive from the Virtual Machine and add the new drive. The drives were added as LsiLogic SCSI drives on SCSI Port 0 and Port 1.

In our case Port 0 should be equivalent to /dev/sdb and Port 1 should be equivalent to /dev/sdc.

Since we failed disk /dev/sdb you will need to remove the disk associated with Port 0.

Now add the new disk you created in the previous step and associate it as Port 0.

Save the config and power on the Server.

Restore the Partition

Before proceeding you should confirm that you have a new device named /dev/sdb. Do that like this:

$ sudo fdisk -l /dev/sdb

You should see something like the following:

$ sudo fdisk -l /dev/sdb
Disk /dev/sdb: 1 GiB, 1073741824 bytes, 2097152 sectors
Disk model: HARDDISK        
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Notice that it is not partitioned.

Duplicate the partition table from /dev/sdc to /dev/sdb:

$ sudo sfdisk -d /dev/sdc | sudo sfdisk /dev/sdb

Now that the disk is partitioned you can add it to the Raid Device:

$ sudo mdadm --manage /dev/md0 --add /dev/sdb1
mdadm: added /dev/sdb1

Now verify the status of the Raid Device:

$ sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Tue Mar 14 20:58:52 2023
        Raid Level : raid1
        Array Size : 1046528 (1022.00 MiB 1071.64 MB)
     Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Thu Mar 16 01:30:32 2023
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : firefly:0  (local to host firefly)
              UUID : b501eeef:a0a505d6:16516baa:ea80a5b8
            Events : 49

    Number   Major   Minor   RaidDevice State
       2       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1

For larger drives it may take some time to actually sync the new device.
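
You can watch the rebuild progress refresh live while it syncs:

$ watch -n 5 cat /proc/mdstat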

You can also get a summary of the device like this:

$ cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid1 sdb1[2] sdc1[1]
      1046528 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>

Confirm your test file is still intact:

$ cat /mnt/Raid/testfile.txt

Wrap Up

Wait a minute. What about the whole part where you have to create the Logical Volume, the file system and mount the device, you say? This is a RAID device. Since it is a mirror, the array kept functioning with only one disk, and when you added the new disk, the array automatically rebuilt it to match the existing one. When you rebooted, the server automatically mounted the file system from the entry in /etc/fstab.

Additional Info

Additional commands that could be important to this How To:

sudo mdadm --stop /dev/md0
sudo mdadm --remove /dev/md0
sudo mdadm --zero-superblock /dev/sdb1 /dev/sdc1

Some valuable links:

Removal of mdadm RAID Devices – How to do it quickly?

How to Set Up Software RAID 1 on an Existing Linux Distribution

Mdadm – How can i destroy or delete an array : Memory, Storage, Backup and Filesystems

SSH Keys

Scenario: You just installed your Linux Server now you want to be able to SSH to that server using your SSH Keys so you don’t have to authenticate via password each time.

Assumption: You have already created your public and private SSH Keys and now you want to copy the Public Key to the Server so you can authenticate with SSH Keys.

The utility we are going to use is ssh-copy-id. This will copy your current user’s public SSH Key to the remote host.

  1. Copy the public SSH Key to the remote host like this:
    $ ssh-copy-id -i ~/.ssh/id_rsa.pub [remote-host]
  2. You will be prompted to enter the password for the remote host. It will copy over your public ssh key from ~/.ssh/id_rsa.pub.
  3. You should now be able to ssh to the remote host using keys as your authentication method instead of password.

How To Sudo without password

Scenario: You just installed your Linux Server, you are the only person using the server, and you want to sudo without having to type your password all the time. This How To will show you one way of accomplishing that task.

This How To assumes you are a member of the sudo group.

  1. Check to see if you are a member of the sudo group:
    $ id
    You should see a list of all the groups you are a member of.
  2. Edit the /etc/sudoers file:
    $ sudo visudo
    This will open the /etc/sudoers file with the default editor.
  3. There will be a line that looks like this:
    %sudo ALL=(ALL:ALL) ALL
  4. Comment out that line and replace it with a line that looks like this:
    %sudo ALL=(ALL) NOPASSWD: ALL
  5. Save the file.

You should now be able to sudo without being prompted for your password every time.

Online Certificate Status Protocol (OCSP)

Online Certificate Status Protocol (OCSP) is an alternative method to Certificate Revocation Lists (CRLs) used to check the validity of digital certificates in a public key infrastructure (PKI).

When a user encounters a digital certificate, their software can use OCSP to send a request to the certificate authority (CA) to check the current status of the certificate. The CA responds to the request with one of three responses: “good”, “revoked”, or “unknown”.

If the response is “good”, the user’s software can proceed with the transaction or access to the resource protected by the certificate. If the response is “revoked”, the software rejects the certificate as invalid. If the response is “unknown”, the software may require additional steps to verify the validity of the certificate.

Unlike CRLs, which can become large and unwieldy as the number of revoked certificates increases, OCSP allows for more efficient and timely checking of individual certificates. However, it requires a constant connection to the CA to receive real-time status updates and can be subject to performance and privacy concerns.

The Good about OCSP

  • Real-time validation: OCSP provides real-time validation of certificates, so users can immediately determine whether a certificate is valid or not.
  • Smaller and more efficient: OCSP responses are typically smaller and more efficient than certificate revocation lists (CRLs), especially for large PKIs with many revoked certificates.
  • Reduced latency: OCSP can reduce latency by eliminating the need for users to download and parse large CRL files.
  • More privacy-friendly: OCSP can be more privacy-friendly than CRLs, as it doesn’t require users to download a complete list of revoked certificates and associated information.

The Bad about OCSP

  • Increased network traffic: OCSP requires users to contact the certificate authority (CA) server each time a certificate is validated, which can increase network traffic and cause performance issues.
  • Single point of failure: OCSP relies on a single CA server for validation, so if the server goes down or experiences issues, users may be unable to validate certificates.
  • Reduced reliability: OCSP may be less reliable than CRLs in certain situations, such as when there are issues with the CA’s OCSP server or network connectivity.
  • Potential privacy concerns: While OCSP can be more privacy-friendly than CRLs, it still allows the CA to track which certificates are being validated and when, which may be a concern for some users.

Check the OCSP status of a Certificate

You can check an Online Certificate Status Protocol (OCSP) response with OpenSSL using the openssl ocsp command. Here is an example command:

openssl ocsp -issuer issuer_cert.pem -cert certificate.pem -url http://ocsp.server.com -text

This command checks the status of the certificate in certificate.pem by sending an OCSP request to the server at http://ocsp.server.com. The issuer_cert.pem file is the certificate of the issuer that signed the certificate.pem file. The -text option displays the response in human-readable text.

After running the command, you will receive an OCSP response that includes the status of the certificate. If the status is “good”, the certificate is valid. If the status is “revoked”, the certificate has been revoked by the issuer. If the status is “unknown”, the server was unable to provide a definitive response for the certificate.

Get the Certificate from a Site:

Let’s use google.com as an example.

Get the Certificate for google.com and save it to a file named certificate.pem:

openssl s_client -connect google.com:443 -showcerts </dev/null | sed -n '/Certificate/,/-----END CERTIFICATE-----/p' | tail -n +3 > certificate.pem

Get the Issuing Cert from a Site:

Get the issuing certificate for google.com and save it to a file named issuer.pem:

openssl s_client -connect google.com:443 -showcerts </dev/null | sed -n '/1 s:/,/-----END CERTIFICATE-----/p' | tail -n +3 > issuer.pem

Extract the OCSP URL from the Certificate:

Use OpenSSL to get the OCSP URL from the certificate and save it to a variable named ocspurl:

ocspurl=$(openssl x509 -in certificate.pem -noout -text | grep "OCSP" | cut -f2,3 -d:)
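
The openssl x509 command also has an -ocsp_uri option that prints the URL directly, which is less brittle than grepping the text dump:

ocspurl=$(openssl x509 -in certificate.pem -noout -ocsp_uri)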

Test the OCSP Status of the Certificate:

Check the OCSP status of the certificate using the ocsp subcommand of OpenSSL like this:

openssl ocsp -issuer issuer.pem -cert certificate.pem -url $ocspurl -text

You should get a response that looks something like this:

OCSP Request Data:
    Version: 1 (0x0)
    Requestor List:
        Certificate ID:
          Hash Algorithm: sha1
          Issuer Name Hash: 12D78B402C356206FA827F8ED8922411B4ACF504
          Issuer Key Hash: A5CE37EAEBB0750E946788B445FAD9241087961F
          Serial Number: 0CD04791FC985ABB27E20A42A232FDF5
    Request Extensions:
        OCSP Nonce: 
            0410CD24FED402FF2B1D2331485C81AD1C21
OCSP Response Data:
    OCSP Response Status: successful (0x0)
    Response Type: Basic OCSP Response
    Version: 1 (0x0)
    Responder Id: A5CE37EAEBB0750E946788B445FAD9241087961F
    Produced At: Apr 26 00:54:27 2023 GMT
    Responses:
    Certificate ID:
      Hash Algorithm: sha1
      Issuer Name Hash: 12D78B402C356206FA827F8ED8922411B4ACF504
      Issuer Key Hash: A5CE37EAEBB0750E946788B445FAD9241087961F
      Serial Number: 0CD04791FC985ABB27E20A42A232FDF5
    Cert Status: good
    This Update: Apr 26 00:39:01 2023 GMT
    Next Update: May  2 23:54:01 2023 GMT

    Signature Algorithm: ecdsa-with-SHA256
         30:45:02:20:45:c2:eb:e2:54:23:2a:c5:49:47:c2:f0:0b:cf:
         8d:06:6d:17:62:26:2e:4a:ba:8e:cd:61:bf:dd:af:e8:ea:cb:
         02:21:00:94:bd:5c:33:e7:ac:20:50:d4:15:45:9e:d8:8d:75:
         1a:fb:c5:95:5f:11:c7:b2:88:47:0a:5b:56:d0:3c:89:b5
WARNING: no nonce in response
Response verify OK
certificate.pem: good
	This Update: Apr 26 00:39:01 2023 GMT
	Next Update: May  2 23:54:01 2023 GMT

OpenSSL OCSP Commands Documentation

Online Certificate Status Protocol command

https://www.openssl.org/docs/man3.0/man1/openssl-ocsp.html