Squoggle
Mac's tech blog
Category Archives: Sys Admin
Certificate Revocation List (CRL)
Posted by on December 15, 2022
Certificate Revocation Lists (CRLs) are used in public key infrastructure (PKI) to identify digital certificates that have been revoked by the certificate authority (CA) before their expiration date.
When a CA revokes a digital certificate, it adds the certificate’s serial number to the CRL. The CRL is then distributed to users who rely on the PKI, such as web browsers and other software that verify digital certificates.
When a user encounters a digital certificate that has been revoked, their software checks the CRL to confirm that the certificate is no longer valid. If the certificate’s serial number is listed on the CRL, the software will reject the certificate and prevent the user from accessing the website or resource protected by the certificate.
CRL Expiration
The client typically gets a new Certificate Revocation List (CRL) from the Certificate Authority (CA) when the existing CRL expires or when there have been changes to the status of certificates that have been revoked.
The CRL contains each revoked certificate’s serial number, along with its revocation date and the reason for revocation.
The CRL has an expiration date and time, after which it is no longer considered valid. The expiration date is typically set by the CA when the CRL is issued, and it is usually a few days to a few weeks after the issue date. When the CRL is about to expire, the client will check with the CA to obtain a new CRL that is valid for the next period.
In addition to the expiration date, the client may also obtain a new CRL if there are changes to the revocation status of certificates that have been previously listed in the CRL. This can happen if a certificate that was previously revoked is now reinstated, or if a certificate that was previously valid is now revoked.
The client can obtain a new CRL from the CA via various means, such as through online updates or downloads. Some PKIs also use alternative methods of certificate revocation, such as Online Certificate Status Protocol (OCSP), which can provide real-time updates on the status of certificates.
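As a rough sketch, a certificate can be checked against a downloaded CRL with OpenSSL. The file names here (ca.crl, ca.crt, server.crt) are placeholders for your own CA certificate, CRL, and server certificate:

```shell
# Convert the CRL to PEM if it was downloaded in DER format
openssl crl -inform DER -in ca.crl -outform PEM -out ca.crl.pem

# Bundle the CA certificate and the CRL, then verify with CRL
# checking enabled; a revoked certificate will fail verification
cat ca.crt ca.crl.pem > ca_with_crl.pem
openssl verify -crl_check -CAfile ca_with_crl.pem server.crt
```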
The Good about CRL
- Offline validation: CRLs can be downloaded and stored offline, allowing users to validate certificates even when they are not connected to the network.
- No single point of failure: Unlike OCSP, CRLs don’t rely on a single server for validation, so they are less susceptible to single points of failure.
- Better reliability: CRLs may be more reliable than OCSP in certain situations, such as when the CA’s OCSP server or network connectivity is experiencing issues.
- Can cover multiple certificates: A single CRL can cover multiple certificates, reducing the amount of data that needs to be downloaded and parsed.
The Bad about CRL
- Larger size: CRLs can become large and unwieldy as the number of revoked certificates increases, leading to longer download times and increased storage requirements.
- Increased latency: CRLs can introduce latency into the certificate validation process, as users must download and parse the entire CRL before they can validate a certificate.
- May be outdated: CRLs are typically updated on a periodic basis, so there is a risk that a certificate may have been revoked between updates and the user may not be aware of it.
- May present a privacy risk: CRLs can potentially expose information about revoked certificates, which could be used by attackers to gather information about a PKI.
Overall, CRLs can be an effective means of validating certificates in a PKI, especially in situations where offline validation is important or when the number of revoked certificates is relatively small. However, they also have some drawbacks that should be considered, such as larger size, increased latency, and potential privacy risks.
Delta CRL
A Delta Certificate Revocation List (CRL) is a type of CRL that contains only the revoked certificates that have been added or changed since the previous CRL was issued. The Delta CRL is meant to be used in conjunction with the base CRL, which contains the complete list of revoked certificates.
The Delta CRL is a more efficient way of distributing certificate revocation information, as it contains only the changes to the previous CRL, rather than the entire list of revoked certificates. This can significantly reduce the size of the CRL and the time it takes to download and process it.
To use a Delta CRL, the client first downloads the base CRL, which contains the complete list of revoked certificates. The client then downloads the Delta CRL, which contains only the changes since the previous CRL. The client then merges the Delta CRL with the base CRL to obtain a complete and up-to-date list of revoked certificates.
The use of Delta CRLs can help to improve the efficiency of certificate revocation in large PKIs, especially when the number of revoked certificates is high and changes occur frequently. However, the use of Delta CRLs also requires additional management and coordination between the CA and the client, as both parties must ensure that the Delta CRL is properly applied and merged with the base CRL.
Troubleshooting CRL
Sometimes you may need to troubleshoot certificate issues by examining a CRL (Certificate Revocation List).
Download a CRL
These instructions show how you can easily download a CRL from a website. I’ll use https://duckduckgo.com/ in this example.
- Open Google Chrome. Navigate to https://duckduckgo.com/. Notice the padlock in the address bar.
- Right click on the padlock in the address bar. Click Connection is secure to see the connection details.
- Click Certificate is valid to open the certificate details box. Click the Details tab.
- In the Certificate Fields box, scroll down and click on CRL Distribution Points. In the Field Value box you will see any URLs associated with the CRL for the Certificate Authority or the Signing Certificate.
- Copy and paste the URL into a new window of the browser. You will be prompted to save the file. In my case I downloaded a file named DigiCertTLSRSASHA2562020CA1-4.crl.
Parse the CRL
- Open a terminal in the directory where you saved the CRL.
- Check to see if the CRL is in DER format or PEM format. Most CRLs are in DER format. A simple head command on the CRL file will show you which it is: if it is DER (binary) you will see gibberish; if it is PEM you will see “-----BEGIN X509 CRL-----”.
- Parse the CRL. If the CRL is in DER format use this syntax:
openssl crl -inform DER -text -noout -in [crl-file] | less
If the CRL is in PEM format use this syntax:
openssl crl -inform PEM -text -noout -in [crl-file] | less
- You will see a list of all the revoked certificates that were issued by the Issuing Certificate.
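If you would rather not eyeball the head output, the file utility can usually tell the two formats apart (assuming your system's file magic database recognizes CRLs; the file name is the one from the download example above):

```shell
# A DER CRL is binary and typically reports as data or as a
# Certificate Revocation List; a PEM CRL reports as ASCII text
file DigiCertTLSRSASHA2562020CA1-4.crl
```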
OpenSSL CRL Commands Documentation
The official OpenSSL documentation for the crl command covers all of the available options.
TLS 1.2 vs. TLS 1.3: Exploring the Key Differences and Advancements in Security
Posted by on November 17, 2022
Introduction
Transport Layer Security (TLS) is a widely-used cryptographic protocol that provides secure communications over a computer network, such as the Internet. TLS ensures that the data transmitted between a client and a server is encrypted and protected from eavesdropping and tampering. In this blog post, we will discuss the key differences between TLS 1.2 and TLS 1.3, the latest version of the protocol, and explore how TLS 1.3 offers improved security, performance, and privacy compared to its predecessor.
Faster and More Efficient Handshake Process
One of the most significant improvements in TLS 1.3 is the streamlined and efficient handshake process. In most cases, TLS 1.3 reduces the number of round trips between the client and server to just one, speeding up the connection establishment. This improvement is particularly beneficial for latency-sensitive applications like web browsing, providing a more responsive user experience.
Modern and Secure Cryptographic Algorithms
TLS 1.3 supports only modern and secure cryptographic algorithms, removing outdated and vulnerable ciphers that were still allowed in TLS 1.2. By eliminating weak ciphers and focusing on strong encryption techniques, TLS 1.3 offers better resistance to attacks and cryptographic weaknesses. For example, TLS 1.3 no longer supports the RSA key exchange, which is vulnerable to several attacks.
Mandatory Forward Secrecy
Forward secrecy is a security feature that ensures that even if a server’s private key is compromised, past communication sessions cannot be decrypted. While forward secrecy was optional in TLS 1.2, it is mandatory in TLS 1.3. This is achieved by using ephemeral (short-lived) keys for each session, which are discarded after use, further enhancing the security of the protocol.
Simplified Protocol Design
TLS 1.3 boasts a simpler and cleaner design compared to TLS 1.2, as it has removed many features and options that were either outdated or considered insecure. This streamlined design makes the protocol easier to implement, understand, and analyze, reducing the likelihood of implementation errors and security vulnerabilities.
Zero Round-Trip Time (0-RTT) Resumption
A new feature introduced in TLS 1.3 is the 0-RTT resumption, which allows clients to send encrypted data to a server during the initial handshake, without waiting for the handshake to complete. This can significantly improve performance in certain scenarios, such as when a client is reconnecting to a previously-visited server. However, this feature can also introduce some security risks, and its use should be carefully evaluated.
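A quick way to see these differences in practice is to ask a server for a specific protocol version with openssl s_client (this sketch assumes OpenSSL 1.1.1 or later; example.com is a placeholder host):

```shell
# Attempt a TLS 1.3 handshake and report the negotiated protocol
openssl s_client -connect example.com:443 -tls1_3 </dev/null 2>/dev/null | grep "Protocol"

# List the TLS 1.3 cipher suites your OpenSSL build supports
openssl ciphers -s -tls1_3 -v
```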
Conclusion
TLS 1.3 offers several advantages over TLS 1.2, including improved security, performance, and privacy. Its adoption has been growing steadily, and it is now the recommended version for securing communications over the Internet. However, it is important to note that while TLS 1.3 is superior, TLS 1.2 is still considered secure when properly configured with modern ciphers and settings. By understanding the key differences between these two versions, organizations can make informed decisions about their security infrastructure and ensure the highest level of protection for their users.
The TLS 1.2 Handshake Explained: Securing Your Online Data with a Twist
Posted by on October 13, 2022
Introduction
Howdy, folks! In today’s digital age, the need for secure online communication is more important than ever. And that’s where the Transport Layer Security (TLS) protocol comes in. It’s the trusty sidekick that keeps your sensitive data safe from prying eyes. In this blog post, we’re going to take a down-home look at the TLS 1.2 handshake process to help you understand how it ensures secure communication between your computer and the websites you visit.
1. The Meet and Greet
When you decide to visit a secure website, your computer (the client) and the website’s server start a friendly little dance called the TLS handshake. The first step of this dance is the “Client Hello” message, where your computer sends a list of its preferred cryptographic algorithms and a random number to the server. It’s sort of like saying, “Howdy, partner! These are the steps I know. What about you?”
2. The Server’s Response
Next, the server picks the best matching cryptographic algorithms and sends a “Server Hello” message back to the client, sharing its own random number. In addition, the server sends its digital certificate, which is like a digital ID card, to prove its identity. It’s the server’s way of saying, “Well, howdy! I reckon we can dance to the same tune. Here’s my ID, just so you know I’m legit.”
3. Checking Credentials
Your computer takes a gander at the server’s certificate and verifies it with the certificate authority (CA) that issued it. If everything checks out, your computer says, “Well, alrighty then! You seem like a fine partner for this dance.”
4. The Secret Handshake
Now that both sides have agreed on the steps, it’s time to create a secret key for encrypting and decrypting the data. Your computer generates a “pre-master secret” and encrypts it with the server’s public key from its certificate. This encrypted pre-master secret is then sent back to the server, which decrypts it with its private key. It’s like sharing a secret handshake that only the two of them will know.
5. Securing the Dance Floor
With the pre-master secret securely exchanged, both your computer and the server derive the same “master secret.” From this master secret, they generate symmetric encryption keys and other required cryptographic material. It’s like setting up a private dance floor, so no one can see or interfere with your moves.
6. The Final Steps
Finally, both the client and the server send “Change Cipher Spec” and “Finished” messages to each other, indicating that they’re ready to start using the newly established encryption keys. It’s like saying, “Alright, partner, let’s start dancing with our new secret steps!”
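If you want to watch this dance happen yourself, openssl s_client can print the handshake messages as they fly by (a sketch; example.com is a placeholder host):

```shell
# Force TLS 1.2 and dump the handshake messages; look for the
# ClientHello, ServerHello, Certificate, and Finished steps
openssl s_client -connect example.com:443 -tls1_2 -msg </dev/null
```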
Conclusion
And there you have it, folks! That’s the TLS 1.2 handshake in a nutshell. This trusty process keeps your online chats safe and sound, ensuring that your sensitive data is encrypted and secure from eavesdroppers. So the next time you visit a secure website or send a confidential email, remember to tip your hat to the hardworking TLS 1.2 handshake that keeps your information safe as houses.
CentOS Drive Testing
Posted by on September 23, 2022
My Server was making noises that were uncharacteristic. This is how I tested my hard drives for failure.
- Install smartmontools:
# yum install smartmontools
- Get a listing of all your hard drives:
# lsblk
- Run a test on one of the hard drives:
# smartctl -t short /dev/sda
You will see something similar to the following:
smartctl 7.0 2018-12-30 r4883 [x86_64-linux-3.10.0-693.11.6.el7.x86_64] (local build)
Copyright (C) 2002-18, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF OFFLINE IMMEDIATE AND SELF-TEST SECTION ===
Sending command: "Execute SMART Short self-test routine immediately in off-line mode".
Drive command "Execute SMART Short self-test routine immediately in off-line mode" successful.
Testing has begun.
Please wait 2 minutes for test to complete.
Test will complete after Fri Sep 23 13:02:21 2022
Use smartctl -X to abort test.
- It will give you a time when you can check the results. When the time has elapsed, come back and check the results like this:
# smartctl -H /dev/sda
smartctl 7.0 2018-12-30 r4883 [x86_64-linux-3.10.0-693.11.6.el7.x86_64] (local build)
Copyright (C) 2002-18, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
- If the test fails you will see something like this:
# smartctl -H /dev/sdb
smartctl 7.0 2018-12-30 r4883 [x86_64-linux-3.10.0-693.11.6.el7.x86_64] (local build)
Copyright (C) 2002-18, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: FAILED!
Drive failure expected in less than 24 hours. SAVE ALL DATA.
Failed Attributes:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
5 Reallocated_Sector_Ct 0x0033 063 063 140 Pre-fail Always FAILING_NOW 1089
- Looks like you need to replace /dev/sdb.
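The short test only takes a couple of minutes; if you suspect a drive, the extended self-test does a fuller surface scan (a sketch; expect it to run for hours on a large drive):

```shell
# Show the estimated polling times for the self-tests
smartctl -c /dev/sda

# Start the extended (long) self-test
smartctl -t long /dev/sda

# After the estimated time has elapsed, review the self-test log
smartctl -l selftest /dev/sda
```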
How to Replace the Hard drive
This is what I did to replace the hard drive.
- Install the lshw package:
# yum install lshw
- Now list hardware of type disk:
# lshw -class disk
You will get way too much info.
- Filter the info with grep like so:
# lshw -class disk | grep -A 5 -B 6 /dev/sdb
You should now only get the one drive you are looking for.
Mine looks like this:
# lshw -class disk | grep -A 5 -B 6 /dev/sdb
*-disk:1
description: ATA Disk
product: WDC WD1002FAEX-0
vendor: Western Digital
physical id: 1
bus info: scsi@5:0.0.0
logical name: /dev/sdb
version: 1D05
serial: WD-WCATR1933480
size: 931GiB (1TB)
capabilities: partitioned partitioned:dos
configuration: ansiversion=5 logicalsectorsize=512 sectorsize=512 signature=000cd438
So it looks like I need to replace a 1TB Western Digital. Fortunately this disk is in a two disk raid array.
Remove the HD from the Raid Array
This is what I did to remove the HD from the Raid Array. Before proceeding, back up everything. I do a daily offsite backup, so in theory I am covered.
- Redo the lsblk command from above to confirm which disk is which:
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb 8:16 0 931.5G 0 disk
└─sdb1 8:17 0 931.5G 0 part
└─md0 9:0 0 931.4G 0 raid1
└─vg_raid-lv_raid 253:4 0 931.4G 0 lvm /mnt/Raid
sdc 8:32 0 931.5G 0 disk
└─sdc1 8:33 0 931.5G 0 part
└─md0 9:0 0 931.4G 0 raid1
└─vg_raid-lv_raid 253:4 0 931.4G 0 lvm /mnt/Raid
- Remember that the defective disk in this case is /dev/sdb and the good one is /dev/sdc.
- Write all cache to disk:
# sync
- Set the disk as failed with mdadm:
# mdadm --manage /dev/md0 --fail /dev/sdb1
This is the failed partition from /dev/sdb.
You should see something like this:
mdadm: set /dev/sdb1 faulty in /dev/md0
- Confirm it has been marked as failed:
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdc1[1] sdb1[0](F)
976630464 blocks super 1.2 [2/1] [_U]
bitmap: 0/8 pages [0KB], 65536KB chunk
The (F) next to sdb1 indicates Failed.
- Now remove the disk with mdadm:
# mdadm --manage /dev/md0 --remove /dev/sdb1
- Now confirm with the cat command as before:
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdc1[1]
976630464 blocks super 1.2 [2/1] [_U]
bitmap: 0/8 pages [0KB], 65536KB chunk
Notice that sdb1 is now gone.
- You can also confirm this with the lsblk command:
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb 8:16 0 931.5G 0 disk
└─sdb1 8:17 0 931.5G 0 part
sdc 8:32 0 931.5G 0 disk
└─sdc1 8:33 0 931.5G 0 part
└─md0 9:0 0 931.4G 0 raid1
└─vg_raid-lv_raid 253:4 0 931.4G 0 lvm /mnt/Raid
- You can now shut down the server and replace that hard drive.
It is easy to find the correct hard drive with the serial number you got from the lshw command you ran earlier. The serial number is:
WD-WCATR1933480
- Power on the server.
- Here is where I ran into an issue that left me scratching my head for quite some time. I’m documenting it here so if it happens again I can resolve it quickly.
It turns out that the spare drive I had on hand I thought was new but was not. It was actually a drive I had installed in another system that was retired and this drive had a boot partition on it. When I booted the server, that was the partition that booted instead of my regular boot partition. I even had to recover passwords on it because the user and root passwords were not the same. All along I was thinking something had happened to bork the users somehow. But it turns out the new drive I had put in was booting and it was not really new. Lesson learned here is to make sure the drive you put in has had any partitions removed. I did this by putting the drive in another system and using fdisk to remove the partitions. Now when I boot the server the normal boot partition boots and this new drive is designated as sdb as I expect. - Now you can copy the partition information from the good disk (/dev/sdc) to the new disk (/dev/sdb). Be warned that this will destroy any partition information on the new disk. Since I already destroyed any partition information in the previous step I’m good with this. The command looks like this:
# sfdisk -d /dev/sdc | sfdisk /dev/sdb - You can check the partition info is correct with the lsblk command:
#lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb 8:16 0 931.5G 0 disk
└─sdb1 8:17 0 931.5G 0 part
sdc 8:32 0 931.5G 0 disk
└─sdc1 8:33 0 931.5G 0 part
└─md0 9:0 0 931.4G 0 raid1
└─vg_raid-lv_raid 253:2 0 931.4G 0 lvm /mnt/Raid
- Now you can reverse the process and re-create the mirror you previously had like this:
# mdadm --manage /dev/md0 --add /dev/sdb1
- Now you can verify the status of your raid like this:
# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Tue Jun 27 17:49:31 2017
Raid Level : raid1
Array Size : 976630464 (931.39 GiB 1000.07 GB)
Used Dev Size : 976630464 (931.39 GiB 1000.07 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Sat Sep 24 14:46:35 2022
State : clean, degraded, recovering
Active Devices : 1
Working Devices : 2
Failed Devices : 0
Spare Devices : 1
Consistency Policy : bitmap
Rebuild Status : 1% complete
Name : Serenity.localdomain:0 (local to host Serenity.localdomain)
UUID : f06aeaae:e0c9707b:6d982f07:3f320578
Events : 114297
Number Major Minor RaidDevice State
2 8 17 0 spare rebuilding /dev/sdb1
1 8 33 1 active sync /dev/sdc1
- You can see that the Rebuild Status is at 1% and that the array is in a rebuilding state.
- You can get the status of the rebuild like so:
# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[2] sdc1[1]
976630464 blocks super 1.2 [2/1] [_U]
[>....................] recovery = 0.7% (7077312/976630464) finish=129.7min speed=124486K/sec
bitmap: 8/8 pages [32KB], 65536KB chunk
You can re-run this command periodically to follow the rebuild progress.
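Rather than re-running the command by hand, watch will refresh it for you:

```shell
# Refresh the RAID status every ten seconds until the rebuild completes
watch -n 10 cat /proc/mdstat
```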
Linux Convert Command
Posted by on September 27, 2020
This command requires that the imagemagick package be installed.
sudo apt install imagemagick
To combine two single page pdf files into one multi-page pdf:
convert file1.pdf file2.pdf merged.pdf
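A couple of other convert uses along the same lines (the file names are placeholders, and rasterizing a PDF also requires ghostscript to be installed):

```shell
# Turn an image into a single-page PDF
convert image.png image.pdf

# Rasterize a PDF into PNG pages at 150 DPI
convert -density 150 doc.pdf page.png
```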
More to come
Create a UEFI Linux Mint USB Installation Flash Drive
Posted by on June 7, 2020
What you will need:
- A USB flash drive. 4GB should be big enough.
- The latest Linux Mint ISO image downloaded to your Windows box. I’m using version 19.3 for this.
- Rufus. Preferably the latest version. As of this writing it is version 3.10.
- Since Rufus is Windows software you will need a PC running Windows.
How to do it:
- Insert the USB drive. Determine what drive letter it is.
- Open Rufus. In the Device field choose your USB drive.
- In the Boot selection field select the ISO image for Linux Mint.
- For Partition scheme choose GPT.
- Choose NTFS for File System.
- Click the START button.
- You may see a pop up that asks you what mode to write. I have had better luck using DD image mode.
- You should then see a warning about overwriting the USB drive. If you are sure you can proceed.
To use the USB flash drive, insert it into the slot, reboot. When you see the splash screen hit F12 to get into boot options.
In my Dell PC I see a section that looks like this:
UEFI Boot: UEFI: SanDisk
Choose that to boot the Linux Mint installer OS.
IPTables
Posted by on May 4, 2020
These are my notes on IP tables. Maybe at some point I’ll do a complete tutorial or How To, but don’t hold your breath.
Chains:
There are typically 3 chains in a standard setup. They are:
INPUT FORWARD OUTPUT
Input is things coming into the server.
Forward are things that are forwarded by the server.
Output are things that are leaving the server.
Policies:
There is a default policy (rule) set up for each chain. CentOS comes standard with the default policy of ACCEPT for each of these chains. The command line to set the policy is like this:
# iptables -P INPUT ACCEPT
Pretty simple really. The -P is the flag to set the policy.
Flush:
The command that flushes the iptables is -F. This deletes the rules in the table.
# iptables -F
If changing rules from a remote host, first set the INPUT policy to ACCEPT, especially if you are going to flush the table. The flush command flushes everything except the default policies, so if you have set INPUT to ACCEPT you won’t lock yourself out. Be sure to undo the ACCEPT on the INPUT policy afterwards, or the server will be essentially wide open unless you have some other rule locking it down.
So the first two commands, accept on the input policy and flush on the table leave you with a pretty much blank rule set.
Saving:
It’s important to understand that the commands take effect immediately so if you do a wrong command you can lock yourself out. However they are not permanently stored until you save them with this:
# service iptables save
If you did lock yourself out then theoretically you could reboot the server before saving and it would revert back to whatever it was on the last save. I have not tested this yet but that is what I understand.
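Another safety net worth knowing about: iptables-save and iptables-restore let you snapshot the running rules to a file and roll back to that snapshot later (a sketch; the backup file path is arbitrary):

```shell
# Snapshot the current rules
iptables-save > /root/iptables.backup

# Restore them later if an experiment goes wrong
iptables-restore < /root/iptables.backup
```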
Showing:
You need to be able to see the results of your commands so you can show your tables like this:
# iptables -L
This leaves a bit to be desired as it shows everything and may be too much information. This will show all Chains. If you just want to list one of the Chains you can do it like this:
# iptables -L [CHAIN]
For example:
# iptables -L INPUT
Even more useful is to list with Line Numbers. This is helpful if you want to insert a rule after a certain existing rule. That command looks like this:
# iptables -L --line-numbers
or
# iptables -L INPUT --line-numbers
Even better is using the -v or the verbose flag. That’s probably the best.
# iptables -L INPUT -v --line-numbers
Adding Rules:
Rules are added to or deleted from the table with the -A or -D flag. The -A flag appends a rule and the -D flag deletes a rule. For example:
# iptables -A INPUT -i lo -j ACCEPT
This will allow everything to reach the lo interface. This is a good idea as programs running on the server interact with the lo interface.
By the way, the -i flag specifies an interface. The -j flag is the jump flag. In the above example if something comes in (INPUT) on the lo interface, then jump to ACCEPT.
It is also important to note that rules are put into the table in the order they are typed in using the -A (append) command. You need to be sure you do not give permissions to something and then take it away later. It is also a good idea to set the policies for INPUT and FORWARD to drop then specifically set up the exceptions to this with the rules.
Deleting Rules:
The -D flag can be used with line numbers and is useful to delete specific lines in your IPTables config. The Delete command is done like this:
# iptables -D INPUT 4
In other words, deleting from the INPUT chain rule number 4.
Inserting Rules:
Rules need to be in certain orders or you could cause problems. You can insert a rule to the table with this:
# iptables -I INPUT 3 -p tcp --dport 23 -j ACCEPT
This inserts at line 3 the rule to accept telnet in the INPUT chain.
IP Addresses:
You can also specify IP addresses in a rule. For example if I wanted to specify that I wanted to accept connections coming from a certain source IP address I would use something like this:
# iptables -A INPUT -s 192.168.0.4 -j ACCEPT
The -s means source IP.
You can also specify entire networks like this:
# iptables -A INPUT -s 192.168.0.0/24 -j ACCEPT
Comments:
Comments can also be added. This is useful if you are putting the lines into a script or something. Everything after the “#” is ignored.
# iptables -A INPUT -s 192.168.0.0/24 -j ACCEPT # using standard slash notation
Mac Addresses:
You can also filter by mac addresses in a rule. Something like this:
# iptables -A INPUT -m mac --mac-source 00:26:B9:D1:D9:6B -j ACCEPT
Anything from the specified source mac address will be accepted by the above rule.
You can also add an IP address as well as a mac address for further filtering:
# iptables -A INPUT -s 192.168.0.4 -m mac --mac-source 00:50:8D:FD:E6:32 -j ACCEPT
The above will append the rule to the end of the chain. It is probably better to insert it somewhere like this:
# iptables -I INPUT 75 -s 192.168.0.4 -m mac --mac-source 00:50:8D:FD:E6:32 -j ACCEPT
Protocols & Ports:
To further refine you really need protocols and ports defined in a lot of the rules. Going back to this example:
# iptables -I INPUT 3 -p tcp --dport 23 -j ACCEPT
The -p means protocol, in this case TCP and the --dport means destination port, in this case port 23 or telnet.
To get more granular on the rules you will want to put them together. For example, I want to accept on the INPUT chain connections coming from 192.168.0.0/24 on port 23, or telnet. The rule would look like this:
# iptables -A INPUT -s 192.168.0.0/24 -p tcp --dport 23 -j ACCEPT
Drop & Reject:
The default configuration of IP Tables for CentOS is to have the INPUT, FORWARD & OUTPUT policies set to ACCEPT. This is probably not a good idea. Depending on your security posture it might be OK for your OUTPUT policy, unless you want to limit what goes out of your system. The way you solve this issue is to add some REJECT rules to the end of your configs. A REJECT rule looks something like this:
# iptables -A INPUT -j REJECT --reject-with icmp-host-prohibited
# iptables -A FORWARD -j REJECT --reject-with icmp-host-prohibited
A REJECT sends a rejection response back to the sender (such as an ICMP error or a TCP reset) whereas a DROP silently discards the packet. Depending on your security posture a DROP might be a better decision.
States:
The State Match
The most useful match criterion is supplied by the 'state' extension, which interprets the connection-tracking analysis of the 'ip_conntrack' module. This is highly recommended.
Specifying '-m state' allows an additional '--state' option, which is a comma-separated list of states to match (the '!' flag indicates not to match those states). These states are:
NEW: A packet which creates a new connection.
ESTABLISHED: A packet which belongs to an existing connection (i.e., a reply packet, or outgoing packet on a connection which has seen replies).
RELATED: A packet which is related to, but not part of, an existing connection, such as an ICMP error, or (with the FTP module inserted), a packet establishing an ftp data connection.
INVALID: A packet which could not be identified for some reason: this includes running out of memory and ICMP errors which don’t correspond to any known connection. Generally these packets should be dropped.
An example of this powerful match extension would be:
# iptables -A FORWARD -i ppp0 -m state ! --state NEW -j DROP
The default iptables configuration has the ESTABLISHED and RELATED states set to ACCEPT on the INPUT chain like this:
# iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
Probably a good idea to keep that and put it first in the list. This will keep any connections going that have already been established. For example if you have an ssh connection going and you change the iptables for SSH and lock yourself out the connection will remain until you close the connection. This has saved me at least once but you could potentially keep bad connections going as well so use with care.
I have also seen that it is a good idea to use states with regular rules like this:
# iptables -A INPUT -s 192.168.0.0/24 -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
I don’t completely understand it myself. Things should work without it. I’ve just seen posts about it and think it is useful to mention here.
Another good thing to use might be the INVALID state. INVALID means a packet that could not be identified somehow. Just drop them:
# iptables -A INPUT -m state --state INVALID -j DROP
# iptables -A FORWARD -m state --state INVALID -j DROP
# iptables -A OUTPUT -m state --state INVALID -j DROP
Script:
Here’s a little script that sets up pretty much what I talked about on this page:
#!/bin/bash
# iptables example configuration script

# Flush all current rules from iptables
iptables -F

# Set default policies for INPUT, FORWARD and OUTPUT chains
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT

# Drop invalid packets
iptables -A INPUT -m state --state INVALID -j DROP
iptables -A FORWARD -m state --state INVALID -j DROP
iptables -A OUTPUT -m state --state INVALID -j DROP

# Accept packets belonging to established and related connections
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Allow private lan to ping
iptables -A INPUT -s 192.168.0.0/24 -p icmp -j ACCEPT

# Set access for localhost
iptables -A INPUT -i lo -j ACCEPT

# Allow private lan to ssh
iptables -A INPUT -s 192.168.0.0/24 -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT

# Save settings
/sbin/service iptables save

# List rules
iptables -L -v --line-numbers
Validate a Certificate’s Chain
Posted by on April 30, 2020
You can validate a Certificate’s chain by extracting the Authority Key Identifier from the cert like this:
$ openssl x509 -noout -text -in [cert-name].crt | grep -A 1 "Authority Key Identifier"
You should get a result similar to this:
X509v3 Authority Key Identifier:
    keyid:FC:8A:50:BA:9E:B9:25:5A:7B:55:85:4F:95:00:63:8F:E9:58:6B:43
This is the key identifier of the certificate's issuer, i.e. the Intermediate Cert. It should match the "Subject Key Identifier" of the Intermediate Cert.
Get the “Subject Key Identifier” of the Intermediate Cert:
$ openssl x509 -noout -text -in [intermediate-cert-name].crt | grep -A 1 "Subject Key Identifier"
You should get a result similar to this:
X509v3 Subject Key Identifier:
    FC:8A:50:BA:9E:B9:25:5A:7B:55:85:4F:95:00:63:8F:E9:58:6B:43
If the two identifiers match then you have the correct Intermediate Cert for the Cert.
You can now repeat the process with the Intermediate Cert and the root Cert and validate that the root cert you have is the one for the Intermediate.
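You can also let openssl check the whole chain in one shot with `openssl verify`. Here's a minimal sketch using throwaway certificates (all file names and subjects here are made up for the demo):

```shell
# Create a self-signed "root" cert to act as the trust anchor
openssl req -x509 -newkey rsa:2048 -days 1 -nodes \
    -keyout root.key -out root.crt -subj "/CN=Demo Root"

# Create a key and CSR for a "server" cert, then sign it with the root
openssl req -newkey rsa:2048 -nodes \
    -keyout server.key -out server.csr -subj "/CN=demo.example"
openssl x509 -req -in server.csr -CA root.crt -CAkey root.key \
    -CAcreateserial -days 1 -out server.crt

# Verify the chain: prints "server.crt: OK" on success
openssl verify -CAfile root.crt server.crt
```

With a real three-level chain you would name your actual files and pass the intermediate separately, e.g. `openssl verify -CAfile root.crt -untrusted intermediate.crt server.crt`.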
The tr command
Posted by on April 1, 2020
The tr command:
It's short for "translate", though it might be easier to remember by thinking of it as "truncate". The man page has this to say about it:
DESCRIPTION
Translate, squeeze, and/or delete characters from standard input, writing to standard output.
There are probably books written on what tr can do. I’m just going to leave some notes here on how I typically use it.
The tr program reads from standard input and writes to standard output.
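A quick way to see that in action is the classic character-set translation, mapping lowercase to uppercase:

```shell
# tr maps each character in the first set to the corresponding
# character in the second set
echo 'hello world' | tr 'a-z' 'A-Z'
# HELLO WORLD
```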
Convert multiple lines of text into a single line of text:
Consider a file named file containing the following data:
abcde
fghij
klmno
pqrst
uvwxy
You want to convert the multiple lines into a single line of text. You can do that using tr with something like this:
$ cat file | tr -d '\n'
The result of the command is written to standard output as:
abcdefghijklmnopqrstuvwxy
The -d option deletes. In this case we’re deleting the newline character.
Replace comma with newline:
Sometimes you need to convert a single delimited line to multiple lines. Consider the following file named file containing the following data:
abcde,fghij,klmno,pqrst,uvwxy
We can translate the comma in the file into a new line character with the following command:
$ cat file | tr ',' '\n'
The results look like this:
abcde
fghij
klmno
pqrst
uvwxy
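The "squeeze" part of the man page description is handy too: the -s option collapses each run of a repeated character down to a single occurrence. For example, to clean up extra spaces:

```shell
# -s squeezes each run of repeated spaces into one space
echo 'too    many     spaces' | tr -s ' '
# too many spaces
```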
Mount OneDrive from Linux Mint
Posted by on December 5, 2019
How to Mount OneDrive from Linux Mint
Don’t install Rclone from the standard repository. That version is too old.
Install Rclone:
cd ~/Downloads
wget https://downloads.rclone.org/rclone-current-linux-amd64.deb
sudo apt install ./rclone-current-linux-amd64.deb
Run the Rclone wizard:
rclone config
Select n to create a new remote:
$ rclone config
2019/12/04 20:47:41 NOTICE: Config file "/home/mac/.config/rclone/rclone.conf" not found - using defaults
No remotes found - make a new one
n) New remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
n/r/c/s/q>
Name it something meaningful, like 'onedrive':
name> onedrive
Select the number for Microsoft OneDrive (22 in this version; the menu numbers change between rclone releases):
22 / Microsoft OneDrive
   \ "onedrive"
You will be asked for a Microsoft App Client Id. Just hit Enter to accept the default and leave blank.
You will be asked for a Microsoft App Client Secret. Hit Enter to accept the default and leave blank.
You will be asked to edit advanced config. Type N
You will be asked to use auto config. Type y :
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine
y) Yes
n) No
y/n> y
Your browser should open now and ask you to sign into OneDrive. Enter your email address, click Next, enter your password, check the box to stay signed in, then click Sign in.
At this point I seem to be locked out of OneDrive as my sign in did not work on this computer.
I tried again, and this time typed n to skip auto config.
Use auto config?
 * Say Y if not sure
 * Say N if you are working on a remote or headless machine
y) Yes
n) No
y/n> n
For this to work, you will need rclone available on a machine that has
a web browser available.
Execute the following on your machine:
	rclone authorize "onedrive"
Then paste the result below:
result>
I did the above and got a very long “token” that I was able to copy and paste into this machine.
It then asked me to choose a number from below. I selected 1 for OneDrive.
Then it said it found 2 drives; I'm not sure why. I selected drive 0.
Then I was able to exit Rclone by typing q:
Current remotes:

Name                 Type
====                 ====
onedrive             onedrive

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> q
Now create a new directory:
mkdir ~/OneDrive
Now mount OneDrive:
rclone --vfs-cache-mode writes mount onedrive: ~/OneDrive
This command runs in the foreground, so it will appear to hang your session; press Ctrl+C to stop it, which also unmounts the drive.
To mount OneDrive automatically at login, open Startup Applications, click Add, and use the following:
Name: Mount OneDrive
Command: sh -c "rclone --vfs-cache-mode writes mount onedrive: ~/OneDrive"