Squoggle

Mac's tech blog

Monthly Archives: March 2023

How To Configure Linux Raid

Scenario: You have a Linux Server that has a boot drive and two additional hard drives you want to configure in a RAID array as data drives. This How To will demonstrate how to create a RAID 1 array using two physical drives in a server that is already up and running.

RAID 1 is disk mirroring: I want to mirror one disk to the other. If one physical drive fails, I can drop the bad drive from the RAID array, install a new drive, add the new drive to the array, and rebuild it as a mirror of the existing disk.

RAID can be implemented via hardware or software. With Hardware RAID the RAID configuration is done directly on the hardware. Typically this is configured in the controller itself or in the system’s BIOS. This How To is about Software RAID and is therefore done from the existing running OS.

In this scenario I have an existing Ubuntu 22.04 LTS Virtual Machine server running, and I have added two 1 GB SCSI drives.

List Disks

The first step is to get information about the disks on your system:

$ sudo lshw -class disk
  *-cdrom                   
       description: DVD reader
       product: CD-ROM
       vendor: VBOX
       physical id: 0.0.0
       bus info: scsi@1:0.0.0
       logical name: /dev/cdrom
       logical name: /dev/sr0
       version: 1.0
       capabilities: removable audio dvd
       configuration: ansiversion=5 status=nodisc
  *-disk
       description: ATA Disk
       product: VBOX HARDDISK
       vendor: VirtualBox
       physical id: 0.0.0
       bus info: scsi@2:0.0.0
       logical name: /dev/sda
       version: 1.0
       serial: VB819a3b95-f514a971
       size: 25GiB (26GB)
       capabilities: gpt-1.00 partitioned partitioned:gpt
       configuration: ansiversion=5 guid=e2bb1f14-9526-4dd6-be4c-5aabb4b48723 logicalsectorsize=512 sectorsize=512
  *-disk:0
       description: SCSI Disk
       product: HARDDISK
       vendor: VBOX
       physical id: 0.0.0
       bus info: scsi@3:0.0.0
       logical name: /dev/sdb
       version: 1.0
       size: 1GiB (1073MB)
       configuration: ansiversion=5 logicalsectorsize=512 sectorsize=512
  *-disk:1
       description: SCSI Disk
       product: HARDDISK
       vendor: VBOX
       physical id: 0.1.0
       bus info: scsi@3:0.1.0
       logical name: /dev/sdc
       version: 1.0
       size: 1GiB (1073MB)
       configuration: ansiversion=5 logicalsectorsize=512 sectorsize=512

Notice that Disk 0 and Disk 1, both SCSI disks, are the two disks that will be used for the RAID 1 device. The single ATA disk is where the operating system is installed on this system. Disk 0’s logical device name is /dev/sdb and Disk 1’s is /dev/sdc.

Partition Disks

The two disks need to have a partition created on them. I do not plan on having these two data drives be bootable so I do not need to create a master boot record on them.

Use the fdisk tool to create a new partition on each drive and set the partition type to Linux raid autodetect:

$ sudo fdisk /dev/sdb

Follow these instructions:

  1. Type n to create a new partition.
  2. Type p to select primary partition.
  3. Type 1 to create /dev/sdb1.
  4. Press Enter to choose the default first sector
  5. Press Enter to choose the default last sector. This partition will span across the entire drive.
  6. Typing p will print information about the newly created partition. By default the partition type is Linux.
  7. We need to change the partition type, so type t.
  8. Enter fd to set partition type to Linux raid autodetect.
  9. Type p again to check the partition type.
  10. Type w to apply the above changes.

Do the same thing for the second drive:

$ sudo fdisk /dev/sdc

Follow the same procedure as above.
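
If you would rather script the partitioning than step through fdisk’s prompts, sfdisk can create the same single, full-disk partition non-interactively. This is a minimal sketch, assuming the disks are blank and named /dev/sdb and /dev/sdc as above:

$ echo 'type=fd' | sudo sfdisk /dev/sdb
$ echo 'type=fd' | sudo sfdisk /dev/sdc

Each command creates one partition spanning the whole disk and sets its type to fd (Linux raid autodetect).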

You should now have two RAID-ready partitions created: /dev/sdb1 and /dev/sdc1.

MDADM

The mdadm utility is used to administer Linux MD (multi-disk) arrays: it can create, manage, and monitor arrays for software RAID or multipath I/O.

View or examine the two devices with mdadm:

$ sudo mdadm --examine /dev/sdb /dev/sdc
/dev/sdb:
   MBR Magic : aa55
Partition[0] :      2095104 sectors at         2048 (type fd)
/dev/sdc:
   MBR Magic : aa55
Partition[0] :      2095104 sectors at         2048 (type fd)

You can see that both partitions are type fd (Linux raid autodetect), but at this stage no RAID has been set up on /dev/sdb1 or /dev/sdc1.

Confirm that /dev/sdb1 and /dev/sdc1 don’t have a RAID superblock yet with this command:

$ sudo mdadm --examine /dev/sdb1 /dev/sdc1
mdadm: No md superblock detected on /dev/sdb1.
mdadm: No md superblock detected on /dev/sdc1.

Notice that there are no superblocks detected on those devices.

Create RAID Logical Device

Now create a RAID 1 logical device named /dev/md0 using the two devices /dev/sdb1 and /dev/sdc1:

$ sudo mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sdb1 /dev/sdc1

The --level=mirror flag creates this device as a RAID 1 device.

$ sudo mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

Notice the warning that this array may not be suitable as a boot device. That is fine, since we do not intend to boot from it.

Now check the status of the MD device like this:

$ cat /proc/mdstat

You should see something like this:

$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid1 sdc1[1] sdb1[0]
      1046528 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>

This shows we now have a new, active MD device designated ‘md0’. It is configured as RAID 1 and consists of /dev/sdb1 and /dev/sdc1.

You can get additional information with the following command:

$ sudo mdadm --detail /dev/md0

You should see details that look somewhat like this:

$ sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Tue Mar 14 20:58:52 2023
        Raid Level : raid1
        Array Size : 1046528 (1022.00 MiB 1071.64 MB)
     Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Tue Mar 14 20:58:57 2023
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : firefly:0  (local to host firefly)
              UUID : b501eeef:a0a505d6:16516baa:ea80a5b8
            Events : 17

    Number   Major   Minor   RaidDevice State
       0       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1

This shows the details of the RAID device.

You can get even more details by drilling down to the two RAID devices like this:

$ sudo mdadm --examine /dev/sdb1 /dev/sdc1

You should see something like this:

$ sudo mdadm --examine /dev/sdb1 /dev/sdc1
/dev/sdb1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : b501eeef:a0a505d6:16516baa:ea80a5b8
           Name : firefly:0  (local to host firefly)
  Creation Time : Tue Mar 14 20:58:52 2023
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 2093056 sectors (1022.00 MiB 1071.64 MB)
     Array Size : 1046528 KiB (1022.00 MiB 1071.64 MB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=0 sectors
          State : clean
    Device UUID : 08c16292:aa06d46a:a0646f22:7bb0bd42

    Update Time : Tue Mar 14 20:58:57 2023
  Bad Block Log : 512 entries available at offset 16 sectors
       Checksum : 9cea23ea - correct
         Events : 17


   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : b501eeef:a0a505d6:16516baa:ea80a5b8
           Name : firefly:0  (local to host firefly)
  Creation Time : Tue Mar 14 20:58:52 2023
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 2093056 sectors (1022.00 MiB 1071.64 MB)
     Array Size : 1046528 KiB (1022.00 MiB 1071.64 MB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
   Unused Space : before=1968 sectors, after=0 sectors
          State : clean
    Device UUID : 40d5d968:1b4aefaa:1d2a61a5:57e8d9a1

    Update Time : Tue Mar 14 20:58:57 2023
  Bad Block Log : 512 entries available at offset 16 sectors
       Checksum : 958a78ee - correct
         Events : 17


   Device Role : Active device 1
   Array State : AA ('A' == active, '.' == missing, 'R' == replacing)

This shows even more details about the two devices that are configured as part of md0.

Make the configs persistent

If you were to reboot the system now, the mdadm configuration would not persist and you would lose your md0 device.

To save the configs and make them persistent do the following:

$ sudo mdadm --detail --scan --verbose | sudo tee -a /etc/mdadm/mdadm.conf

This appends a couple of lines to the /etc/mdadm/mdadm.conf file.
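
The appended entry should look roughly like this (the UUID matches the one reported by mdadm --detail above; the exact fields can vary between mdadm versions):

ARRAY /dev/md0 level=raid1 num-devices=2 metadata=1.2 name=firefly:0 UUID=b501eeef:a0a505d6:16516baa:ea80a5b8
   devices=/dev/sdb1,/dev/sdc1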

Then update the initramfs so the array is assembled correctly at boot:

$ sudo update-initramfs -u

Reboot the server to ensure the configs are persistent

Once the server has booted, check to see what your MD device is:

$ cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
md0 : active (auto-read-only) raid1 sdc1[1] sdb1[0]
      1046528 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>

Your new MD device md0 still exists as a RAID 1 device. (The auto-read-only flag is normal; it clears on the first write to the array.)

Configure Logical Volume Management (LVM)

Now you will need to configure LVM on the RAID Device.

Create a Physical Volume on the MD Device:

$ sudo pvcreate /dev/md0
  Physical volume "/dev/md0" successfully created.

Confirm that you have a new Physical Volume:

$ sudo pvdisplay /dev/md0
  "/dev/md0" is a new physical volume of "1022.00 MiB"
  --- NEW Physical volume ---
  PV Name               /dev/md0
  VG Name               
  PV Size               1022.00 MiB
  Allocatable           NO
  PE Size               0   
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               7zbtfn-EmYy-L8SI-VGsR-JXLU-yRge-3iddqw

The next step is to create a Volume Group on the Physical Volume. I am going to name the Volume Group ‘vg_raid‘:

$ sudo vgcreate vg_raid /dev/md0
  Volume group "vg_raid" successfully created

Confirm the Volume Group was created as expected:

$ sudo vgdisplay vg_raid
  --- Volume group ---
  VG Name               vg_raid
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1020.00 MiB
  PE Size               4.00 MiB
  Total PE              255
  Alloc PE / Size       0 / 0   
  Free  PE / Size       255 / 1020.00 MiB
  VG UUID               qW9wd6-jGAr-LiV4-3xG0-mm8J-YP02-rEQGMs

Now create the Logical Volume on the Volume Group. I will name the Logical Volume ‘lv_raid’ and use the maximum space available on the Volume Group:

$ sudo lvcreate -n lv_raid -l 100%FREE vg_raid
  Logical volume "lv_raid" created.

Confirm creation of the Logical Volume with the lvdisplay command like so:

$ sudo lvdisplay vg_raid/lv_raid
  --- Logical volume ---
  LV Path                /dev/vg_raid/lv_raid
  LV Name                lv_raid
  VG Name                vg_raid
  LV UUID                FZZEIT-stdx-RuAR-vA4Y-MSHY-Itia-NTzG3b
  LV Write Access        read/write
  LV Creation host, time firefly, 2023-03-14 23:45:14 +0000
  LV Status              available
  # open                 0
  LV Size                1020.00 MiB
  Current LE             255
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

Create Filesystem

Now you can create a file system on the Logical Volume. Since the file system type I am using on the other drive is ext4, I will use the same here:

$ sudo mkfs.ext4 -F /dev/vg_raid/lv_raid
mke2fs 1.46.5 (30-Dec-2021)
Creating filesystem with 261120 4k blocks and 65280 inodes
Filesystem UUID: 40420f4a-f470-47d5-a53d-922829cde8f3
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

Mount the Logical Volume

Now that you have the file system created you are ready to mount it somewhere.

Create a mount point for mounting the Logical Volume. I’m going to create ‘/mnt/Raid‘:

$ sudo mkdir /mnt/Raid

Now you can mount the Logical Volume on the new mount point:

$ sudo mount /dev/vg_raid/lv_raid /mnt/Raid

You can now optionally set the ownership of the mount point:

$ sudo chown mac:mac /mnt/Raid/

Check to see if your Logical Volume is mounted:

$ mount | grep raid

You should see something similar to the following:

$ mount | grep raid
/dev/mapper/vg_raid-lv_raid on /mnt/Raid type ext4 (rw,relatime)

You can see that the /dev/mapper device is /dev/mapper/vg_raid-lv_raid.

Automatically Mount when booting

To automatically mount the Raid Device when booting edit the /etc/fstab file and add a section that looks like this:

# Mount Raid
/dev/mapper/vg_raid-lv_raid   /mnt/Raid   ext4   defaults   0   0
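
Alternatively, you can mount by filesystem UUID instead of the device-mapper path. The UUID was shown in the mkfs.ext4 output above, and you can look it up again with blkid:

$ sudo blkid /dev/vg_raid/lv_raid

The corresponding /etc/fstab entry would then look like this (use the UUID reported on your system):

# Mount Raid by UUID
UUID=40420f4a-f470-47d5-a53d-922829cde8f3   /mnt/Raid   ext4   defaults   0   0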

Now un-mount the file system:

$ sudo umount /dev/mapper/vg_raid-lv_raid

Then re-mount with the entry in /etc/fstab:

$ sudo mount -a

Then verify it was remounted:

$ mount | grep raid

You should now be able to reboot the system and it will automatically mount this Raid Device.

Test

Test Auto Mount and Read & Write to filesystem

Now test your setup: write a test file, reboot, confirm the RAID array is working, and then write to the file again.

Create a test file in /mnt/Raid:

$ vi /mnt/Raid/testfile.txt

Write some information to the file and save the file.
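
If you prefer a one-liner to an editor, something like this works as well (assuming you changed the ownership of /mnt/Raid to your user above; otherwise pipe through sudo tee):

$ echo "RAID test - written before reboot" > /mnt/Raid/testfile.txt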

Reboot your system:

$ sudo shutdown -r now

Once you have logged back into your system, confirm that the md0 device is active:

$ sudo mdadm --detail /dev/md0

Examine the two partitions that make up the mdadm device:

$ sudo mdadm --examine /dev/sdb1 /dev/sdc1

Confirm the system automatically mounted the Raid device:

$ mount | grep raid

View the test file you created earlier:

$ ls -l /mnt/Raid/

Then confirm its content:

$ cat /mnt/Raid/testfile.txt

Then edit the file, add some additional text, save it, and confirm the change:

$ vi /mnt/Raid/testfile.txt

Test Drive Failure

Now test your setup by simulating a failure in one of the drives.

Remember that we created a file system on a Logical Volume on top of an mdadm RAID device made up of two partitions, each partition on its own physical device (/dev/sdb and /dev/sdc). Then we mounted that file system on /mnt/Raid. We can easily see this layout with the lsblk command:

$ lsblk /dev/sdb /dev/sdc

You should see something like the following:

$ lsblk /dev/sdb /dev/sdc
NAME                  MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINTS
sdb                     8:16   0    1G  0 disk  
└─sdb1                  8:17   0 1023M  0 part  
  └─md0                 9:0    0 1022M  0 raid1 
    └─vg_raid-lv_raid 253:1    0 1020M  0 lvm   /mnt/Raid
sdc                     8:32   0    1G  0 disk  
└─sdc1                  8:33   0 1023M  0 part  
  └─md0                 9:0    0 1022M  0 raid1 
    └─vg_raid-lv_raid 253:1    0 1020M  0 lvm   /mnt/Raid

This shows the mount point /mnt/Raid on the Logical Volume, which lives on the md0 RAID device made up of two partitions, each on its own disk.

Let’s assume that at some point in the future disk sdb starts to degrade, will soon fail, and needs to be replaced.

Write all cache to disk:

$ sudo sync

Un-mount the file system:

$ sudo umount /dev/vg_raid/lv_raid

Mark the /dev/sdb1 partition of md0 as failed:

$ sudo mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm: set /dev/sdb1 faulty in /dev/md0

Verify that partition /dev/sdb1 is marked as faulty:

$ sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Tue Mar 14 20:58:52 2023
        Raid Level : raid1
        Array Size : 1046528 (1022.00 MiB 1071.64 MB)
     Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Wed Mar 15 20:57:51 2023
             State : clean, degraded 
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 1
     Spare Devices : 0

Consistency Policy : resync

              Name : firefly:0  (local to host firefly)
              UUID : b501eeef:a0a505d6:16516baa:ea80a5b8
            Events : 27

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       33        1      active sync   /dev/sdc1

       0       8       17        -      faulty   /dev/sdb1

You can also verify with this:

$ cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid1 sdb1[0](F) sdc1[1]
      1046528 blocks super 1.2 [2/1] [_U]
      
unused devices: <none>

The (F) next to sdb1 indicates it has been marked as failed.

Now you can remove the disk with mdadm like this:

$ sudo mdadm --manage /dev/md0 --remove /dev/sdb1
mdadm: hot removed /dev/sdb1 from /dev/md0

Confirm with the cat command like before:

$ cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid1 sdc1[1]
      1046528 blocks super 1.2 [2/1] [_U]
      
unused devices: <none>

The only partition listed is now sdc1.

You can also confirm with the lsblk command like this:

$ lsblk /dev/sdb /dev/sdc
NAME                  MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINTS
sdb                     8:16   0    1G  0 disk  
└─sdb1                  8:17   0 1023M  0 part  
sdc                     8:32   0    1G  0 disk  
└─sdc1                  8:33   0 1023M  0 part  
  └─md0                 9:0    0 1022M  0 raid1 
    └─vg_raid-lv_raid 253:1    0 1020M  0 lvm

You can now shutdown the server and replace that hard drive.

Replace Hard Drive

For a Physical Machine it is easy to find the correct hard drive to remove by referencing the serial number you got from the lshw command you ran at the beginning of this How To. Replace the failed drive and power on the server.
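
If you want to double-check which device name belongs to which physical drive before pulling it, the /dev/disk/by-id symlinks include the drive serial numbers in their names. A quick sketch (the output will differ on your system):

$ ls -l /dev/disk/by-id/ | grep sdb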

Replace Virtual Hard Drive

For a Virtual Machine running on VirtualBox use these instructions to simulate a new hard drive.

Open the VirtualBox Virtual Media Manager: File > Tools > Virtual Media Manager.

Create a new Virtual Hard Disk with the same size and type as the other two Virtual Disks. Take note of what you name it and where you store it.

Go to the Storage Settings of your Virtual Machine: Machine > Settings > Storage. Here you will need to remove the “bad” drive from the Virtual Machine and add the new drive. The drives were added as LsiLogic SCSI drives on SCSI Port 0 and Port 1.

In our case Port 0 should be equivalent to /dev/sdb and Port 1 should be equivalent to /dev/sdc.

Since we failed disk /dev/sdb you will need to remove the disk associated with Port 0.

Now add the new disk you created in the previous step and associate it as Port 0.

Save the config and power on the Server.

Restore the Partition

Before proceeding you should confirm that you have a new device named /dev/sdb. Do that like this:

$ sudo fdisk -l /dev/sdb

You should see something like the following:

$ sudo fdisk -l /dev/sdb
Disk /dev/sdb: 1 GiB, 1073741824 bytes, 2097152 sectors
Disk model: HARDDISK        
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Notice that it is not partitioned.

Duplicate the partition from /dev/sdc to /dev/sdb:

$ sudo sfdisk -d /dev/sdc | sudo sfdisk /dev/sdb

Now that the disk is partitioned you can add it to the Raid Device:

$ sudo mdadm --manage /dev/md0 --add /dev/sdb1
mdadm: added /dev/sdb1

Now verify the status of the Raid Device:

$ sudo mdadm --detail /dev/md0
/dev/md0:
           Version : 1.2
     Creation Time : Tue Mar 14 20:58:52 2023
        Raid Level : raid1
        Array Size : 1046528 (1022.00 MiB 1071.64 MB)
     Used Dev Size : 1046528 (1022.00 MiB 1071.64 MB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent

       Update Time : Thu Mar 16 01:30:32 2023
             State : clean 
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : resync

              Name : firefly:0  (local to host firefly)
              UUID : b501eeef:a0a505d6:16516baa:ea80a5b8
            Events : 49

    Number   Major   Minor   RaidDevice State
       2       8       17        0      active sync   /dev/sdb1
       1       8       33        1      active sync   /dev/sdc1

For larger drives it may take some time to actually sync the new device.
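
You can watch the rebuild progress while it runs, for example:

$ watch -n 5 cat /proc/mdstat

While the resync is in progress the md0 stanza shows a recovery progress line; once it completes the status returns to [UU].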

You can also get a summary of the device like this:

$ cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10] 
md0 : active raid1 sdb1[2] sdc1[1]
      1046528 blocks super 1.2 [2/2] [UU]
      
unused devices: <none>

Confirm your test file is still intact:

$ cat /mnt/Raid/testfile.txt

Wrap Up

Wait a minute. What about the whole part where you have to create the Logical Volume, the file system, and mount the device, you say? This is a RAID device. While one disk was missing the array kept functioning on the remaining disk, so the Logical Volume, the file system, and your data were never lost. When you rebooted the server it automatically mounted the RAID device from the entry in /etc/fstab, and when you added the new disk to the array it was automatically rebuilt as a mirror of the existing disk.

Additional Info

Additional commands that could be important to this How To:

# Stop the array
sudo mdadm --stop /dev/md0
# Remove the stopped array device
sudo mdadm --remove /dev/md0
# Wipe the RAID superblocks from the member partitions so the disks can be reused
sudo mdadm --zero-superblock /dev/sdb1 /dev/sdc1

Some valuable links:

Removal of mdadm RAID Devices – How to do it quickly?

How to Set Up Software RAID 1 on an Existing Linux Distribution

Mdadm – How can i destroy or delete an array : Memory, Storage, Backup and Filesystems

SSH Keys

Scenario: You just installed your Linux Server and now you want to be able to SSH to that server using your SSH keys so you don’t have to authenticate via password each time.

Assumption: You have already created your public and private SSH Keys and now you want to copy the Public Key to the Server so you can authenticate with SSH Keys.

The utility we are going to use is ssh-copy-id. This will copy your current user’s public SSH Key to the remote host.

  1. Copy the public SSH Key to the remote host like this:
    $ ssh-copy-id -i ~/.ssh/id_rsa.pub [remote-host]
  2. You will be prompted to enter your password on the remote host. The utility will then copy over your public SSH key from ~/.ssh/id_rsa.pub.
  3. You should now be able to ssh to the remote host using keys as your authentication method instead of a password.
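
To confirm that key-based authentication is being used, connect again; this time you should not be prompted for a password. Forcing key authentication with -o PasswordAuthentication=no makes the test unambiguous:

$ ssh -o PasswordAuthentication=no [remote-host]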

Install and Set Up Ubuntu 22.04 Server

This page will walk through what I did to install Ubuntu 22.04 Server for use in my home network.

I’m installing it as a Virtual Machine using VirtualBox 7.0, but these instructions should be valid if you are installing on a Physical Machine as well, which I will do once I have confirmed this is working the way I expect.

Virtual Machine Specs

The Virtual Machine Specs I set up for this are:

Memory: 4096 MB
Processors: 2 CPUs
Storage: SATA Port 0, 25 GB
Network: Bridged Adapter

In VirtualBox 7.0 you can do what is called an Unattended Installation. I believe this is only available for certain Operating Systems but since I have not explored that option fully I am skipping it for now.

Follow these instructions to install Ubuntu 22.04 Server:

  1. Power on the Machine.
  2. When Presented with the GNU GRUB screen, select ‘Try or Install Ubuntu Server‘.
  3. Ubuntu starts the boot process.
  4. When Presented with a Language Selection Menu, select your language.
  5. Your keyboard configuration may already be selected for you. If not select your keyboard and then select ‘Done‘.
  6. Choose the Type of install. For this document I am going to choose ‘Ubuntu Server‘ and also select ‘Search for third-party drivers’.
  7. On the Network connections screen my network settings have been filled in automatically by DHCP. This is satisfactory for me so I choose ‘Done‘.
  8. On the Configure proxy screen you can choose to configure a proxy. This can be common in a corporate environment but in my case as my home network I don’t need to do this. Click ‘Done‘ when satisfied.
  9. On the Configure Ubuntu archive mirror screen you can safely click ‘Done‘ unless you know otherwise.
  10. On the Guided storage configuration screen I chose Use an entire disk and Set up this disk as an LVM Group. I did NOT encrypt. Select ‘Done‘.
  11. On the Storage configuration screen I accepted the summary and selected ‘Done‘.
  12. On the Confirm destructive action dialogue box I selected ‘Continue‘ since this is a new machine and I am confident I am not overwriting anything.
  13. On the Profile setup screen I typed my name, chose a server name, chose a user name and password then selected ‘Done‘.
  14. On the Upgrade to Ubuntu Pro screen select ‘Skip for now‘ then select ‘Continue‘.
  15. On the SSH Setup screen select ‘Install OpenSSH server’, then select ‘Done‘.
  16. On the Third-party drivers screen I see that “No applicable third-party drivers are available locally or online.” so I select ‘Continue‘.
  17. On the Featured Server Snaps screen I’m leaving it all blank. This document is about installing Ubuntu and not about snaps so I may do another document on that later. Select ‘Done‘.
  18. You will see a message that it is installing the system and then security updates. When it is ready you will be able to select ‘Reboot Now‘.
  19. Once you have rebooted you should be given a login prompt. You can now login with the user you created.
  20. When you login you will get some statistics about the system, one of which is the IP address. You can use that IP address to ssh to the host now and do some of the other things outlined in this document.
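
For example, once you are logged in you can re-check the server’s address and then connect from another machine (the user and address below are placeholders for your own):

$ ip -brief address
$ ssh <your-user>@<server-ip>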

Additional Resources

Here are additional resources that will be useful in configuring your server.

Additional Useful Tools

Install additional packages:

$ sudo apt install members
$ sudo apt install net-tools

Additional Pages to review

How to Install and Configure an NFS Server on Ubuntu 22.04

How to Install a Desktop (GUI) on an Ubuntu Server

https://phoenixnap.com/kb/how-to-install-a-gui-on-ubuntu

How To Sudo without password

Scenario: You just installed your Linux Server, you are the only person using it, and you want to sudo without having to type your password all the time. This How To will show you one way of accomplishing that task.

This How To assumes you are a member of the sudo group.

  1. Check to see if you are a member of the sudo group:
    $ id
    You should see a list of all the groups you are a member of.
  2. Edit the /etc/sudoers file:
    $ sudo visudo
    This will open the /etc/sudoers file with the default editor.
  3. There will be a line that looks like this:
    %sudo ALL=(ALL:ALL) ALL
  4. Comment out that line and replace it with a line that looks like this:
    %sudo ALL=(ALL) NOPASSWD: ALL
  5. Save the file.

You should now be able to sudo without being prompted for your password every time.
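
A quick way to verify: clear any cached sudo credentials and then run a harmless command. If NOPASSWD is in effect you will not be prompted:

$ sudo -k
$ sudo true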

Install VirtualBox 7.0 on Linux Mint 21.x

This is what I did to install VirtualBox 7.0 on my new Linux Mint 21.1 workstation.

See the VirtualBox Wiki for the deets on VirtualBox 7.0

  1. Ensure your system has been updated:
    $ sudo apt update && sudo apt upgrade -y
  2. Download the VirtualBox GPG Keys:
    $ curl https://www.virtualbox.org/download/oracle_vbox_2016.asc | gpg --dearmor > oracle_vbox_2016.gpg
    $ curl https://www.virtualbox.org/download/oracle_vbox.asc | gpg --dearmor > oracle_vbox.gpg
  3. Import the VirtualBox GPG Keys:
    $ sudo install -o root -g root -m 644 oracle_vbox_2016.gpg /etc/apt/trusted.gpg.d/
    $ sudo install -o root -g root -m 644 oracle_vbox.gpg /etc/apt/trusted.gpg.d/
  4. There does not appear to be an official repository for Linux Mint, but Linux Mint is derived from Ubuntu 22.04 which is code named ‘Jammy’. Add the Jammy VirtualBox Repository to the system:
    $ echo "deb [arch=amd64] http://download.virtualbox.org/virtualbox/debian \
    jammy contrib" | sudo tee /etc/apt/sources.list.d/virtualbox.list
  5. Update the Repositories:
    $ sudo apt update
  6. Install Linux Headers:
    $ sudo apt install linux-headers-$(uname -r) dkms
  7. Install VirtualBox:
    $ sudo apt install virtualbox-7.0
  8. Download the VirtualBox Extension Pack:
    $ cd ~/Downloads
    $ VER=$(curl -s https://download.virtualbox.org/virtualbox/LATEST.TXT)
    $ wget https://download.virtualbox.org/virtualbox/$VER/Oracle_VM_VirtualBox_Extension_Pack-$VER.vbox-extpack
  9. Install the Extension Pack:
    $ sudo VBoxManage extpack install Oracle_VM_VirtualBox_Extension_Pack-*.vbox-extpack
  10. You can now launch VirtualBox from the Desktop menu.
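
To confirm the installation and the Extension Pack from the command line you can run:

$ VBoxManage --version
$ VBoxManage list extpacks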