Mount: Fstab & Mtab #
Overview #
# Mounts at system start
vi /etc/fstab
# Mount all mountpoints from fstab
mount -a
# Currently mounted filesystems (mtab is maintained by the system, view only)
cat /etc/mtab
# List filesystems (for fstab entries)
lsblk -f
blkid #
Blkid displays the UUID (Universally Unique Identifier), filesystem type, label, and other attributes related to disk partitions and storage devices.
Using the UUID is considered more reliable because it is unique to the filesystem and does not change, even if the disk is moved to a different port on the motherboard, or if you add another disk to the system.
# List
sudo blkid
# Shell output:
...
/dev/mapper/debian--vg-root: UUID="272d2881-7512-4941-a25c-4ecfc9d279a8" BLOCK_SIZE="4096" TYPE="ext4"
Fstab Example:
# fstab
UUID=272d2881-7512-4941-a25c-4ecfc9d279a8 / ext4 errors=remount-ro 0 1
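The UUID can also be used to mount a filesystem manually. A minimal sketch, reusing the UUID from the blkid output above and /mnt as a hypothetical mount point:
# Mount by UUID
sudo mount UUID=272d2881-7512-4941-a25c-4ecfc9d279a8 /mnt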
Pass Parameter #
Fstab Example:
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/disk/by-uuid/62f6cdf8-2832-4120-901d-18e7e7e37d0a /boot ext4 defaults 0 1
- Used by fsck to decide in which order filesystems are to be checked:
0 - The filesystem will be ignored and not checked at boot time.
1 - The filesystem will be checked first. This is typically used for the root filesystem, and only one filesystem should have this value to avoid potential conflicts.
2 - All other filesystems that should be checked.
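A sketch of how the pass values typically line up in a complete fstab (the UUIDs are placeholders, not real values):
# <file system> <mount point> <type> <options> <dump> <pass>
UUID=placeholder-root / ext4 errors=remount-ro 0 1
UUID=placeholder-boot /boot ext4 defaults 0 2
UUID=placeholder-swap none swap sw 0 0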
Mount and unmount #
# Mount file system (from fstab)
mount /mountpoint
# Unmount file system
umount /mountpoint
# Lazy unmount: detach a busy file system without killing processes
umount -l /mountpoint
# Force unmount (mainly for unreachable network mounts like NFS)
umount -f /mountpoint
Unmount busy device #
# List processes that use a mountpoint
lsof +f -- /mnt/mountpoint
# or
lsof | grep '/mountpoint'
# Kill busy process (by PID)
kill PID
# Unmount
umount /mountpoint
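Alternatively, fuser can list and kill the blocking processes in one step; use it with care, as -k sends SIGKILL by default:
# List processes using the mountpoint
fuser -vm /mountpoint
# Kill all processes using the mountpoint
sudo fuser -km /mountpoint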
Disks & Partitions #
lsblk (List Disks) #
List block devices:
Find device and partition information
Example: lsblk
sda 8:0 0 20G 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 1.8G 0 part /boot
└─sda3 8:3 0 18.2G 0 part
└─ubuntu--vg-ubuntu--lv 253:0 0 10G 0 lvm /
| Command | Description |
| --- | --- |
| lsblk | List disks and partitions |
| lsblk -f | List file systems (ntfs, ext4, …) |
| lsblk -m | Owner / permissions |
| lsblk /dev/sda | List partitions of specific disk |
df (List Mountpoints & Filesystems) #
Disk free: Disk usage and filesystem information
# List mountpoints
df -h
# Shell output:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/ubuntu--vg-ubuntu--lv 17G 11G 4.8G 70% /
/dev/sda2 2.0G 251M 1.6G 14% /boot
# List filesystems
df -T
# Shell output:
Filesystem Type 1K-blocks Used Available Use% Mounted on
/dev/mapper/ubuntu--vg-ubuntu--lv ext4 16849712 11020688 4947772 70% /
/dev/sda2 ext4 1992552 256672 1614640 14% /boot
fdisk #
Fixed disk or format disk:
Create, delete, resize, and modify partitions
Example: fdisk -l
Disk /dev/sda: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk model: ...
...
Device Start End Sectors Size Type
/dev/sda1 2048 4095 2048 1M BIOS boot
/dev/sda2 4096 3719167 3715072 1.8G Linux filesystem
/dev/sda3 3719168 41940991 38221824 18.2G Linux filesystem
| Command | Description |
| --- | --- |
| fdisk -l | List disks and partitions |
| fdisk -l /dev/sda | List partitions of specific disk |
parted #
# Start parted for specific disk: sda
sudo parted /dev/sda
# List all partitions
sudo parted -l
# List partitions: Specific disk
sudo parted /dev/sda print
Extend Partition #
- Check the disk & partition size
# List block devices
lsblk
# Shell output:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 40G 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 2G 0 part /boot
└─sda3 8:3 0 33G 0 part
└─ubuntu--vg-ubuntu--lv 253:0 0 16.5G 0 lvm /
- Start parted for the specific disk
# Start parted for specific disk: sda
sudo parted /dev/sda
# List partitions
print
# Shell output:
Model: VMware, VMware Virtual S (scsi)
Disk /dev/sda: 42.9GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 2097kB 1049kB bios_grub
2 2097kB 2150MB 2147MB ext4
3 2150MB 37.6GB 35.4GB
# Resize partition 3 (/dev/sda3)
resizepart 3 100%
# Quit parted
quit
- Verify the new disk & partition size
# Check the partition size: List block devices
lsblk
# Shell output:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 40G 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 2G 0 part /boot
└─sda3 8:3 0 38G 0 part
└─ubuntu--vg-ubuntu--lv 253:0 0 16.5G 0 lvm /
sr0 11:0 1 1024M 0 rom
- Extend PV & LV
# Resize PV
sudo pvresize /dev/sda3
# Resize LV and filesystem
sudo lvextend -r -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv
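Finally verify that the filesystem grew along with the LV (the sizes shown will reflect your own layout):
# Check the new LV and filesystem size
lsblk
df -h /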
Filesystems #
mkfs #
# Create ext4 filesystem (on disk sdb)
sudo mkfs.ext4 /dev/sdb
# Create ext4 filesystem with label
sudo mkfs.ext4 -L disk-2 /dev/sdb
# Check label: lsblk
lsblk -o NAME,FSTYPE,LABEL,MOUNTPOINT
# Check label
sudo e2label /dev/sdb
# Check label: Alternative command
sudo blkid /dev/sdb
# Shell output:
/dev/sdb: LABEL="disk-2" UUID="812b5ef6-0392-4fba-aaa9-25f7dece632f" BLOCK_SIZE="4096" TYPE="ext4"
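The label can be used in fstab instead of a UUID. A minimal sketch, assuming a hypothetical /mnt/disk2 mount point:
# fstab entry that mounts by label
LABEL=disk-2 /mnt/disk2 ext4 defaults 0 2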
wipefs #
The wipefs command is used to wipe filesystems, RAID signatures, and partition tables from a device or partition.
# Wipe disk: For example sdb
sudo wipefs -a /dev/sdb
-a
“All”: remove every signature found on the device
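Run without options, wipefs only lists the signatures it finds, which is a safe way to preview what -a would erase:
# List signatures without wiping anything
sudo wipefs /dev/sdb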
fsck #
Check and repair filesystems.
# Open manual
man fsck
# Check a filesystem: Define disk or partition
sudo fsck /dev/sdb
# Check and repair a filesystem: Try to repair filesystem errors automatically (no prompts)
sudo fsck -a /dev/sdb
Note: Never run fsck on a mounted filesystem.
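To make sure a device is not mounted before running fsck:
# Prints the mountpoint if the device is mounted, nothing otherwise
findmnt /dev/sdb
# Unmount first if necessary
sudo umount /dev/sdb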
LVM #
Overview #
Physical Volume:
Example pvdisplay
--- Physical volume ---
PV Name /dev/sda3
VG Name ubuntu-vg
...
Volume Groups:
Example vgs
VG #PV #LV #SN Attr VSize VFree
ubuntu-vg 1 1 0 wz--n- 18.22g 8.22g
Logical Volumes:
Example lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
ubuntu-lv ubuntu-vg -wi-ao---- 10.00g
Physical Volumes #
# List physical volumes:
sudo pvs
# List physical volumes: Details
sudo pvdisplay
# Create physical volume
sudo pvcreate /dev/sdb
Volume Groups #
# List volume groups
sudo vgs
# List volume groups: Details
sudo vgdisplay -v
# List volume groups: Details about specific VG (for example "debian-vg")
sudo vgdisplay debian-vg
# Create new volume group
sudo vgcreate vgname /dev/sdb
# Add physical volume to volume group
sudo vgextend vgname /dev/sdc
# Remove physical volume from group
sudo vgreduce vgname /dev/sdc
Create Logical Volume #
# Create LV: storage in GB
sudo lvcreate -n lvname --size 5G vgname
# Create LV: storage in GB / different notation
sudo lvcreate -n lvname -L 5G vgname
# Create LV: storage in %
sudo lvcreate -n lvname -l 100%FREE vgname
# Create ext4 filesystem on the LV
sudo mkfs.ext4 /dev/vgname/lvname
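To use the new LV it still needs a mount point. A minimal sketch, reusing the placeholder names from above and a hypothetical /mnt/data directory:
# Create mount point and mount the LV
sudo mkdir -p /mnt/data
sudo mount /dev/vgname/lvname /mnt/data
# Optional fstab entry
/dev/vgname/lvname /mnt/data ext4 defaults 0 2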
Delete Logical Volume #
# Deactivate logical volume
sudo lvchange -an vgname/lvname
# Check status / verify status
sudo lvscan
# Delete LV / Remove logical volume from VG
sudo lvremove vgname/lvname
Extend Logical Volume #
# Extend LV by 20GB
sudo lvextend -L +20G /dev/vgname/lvname
# Check new size of LV
lsblk
# Extend filesystem
sudo resize2fs -p /dev/vgname/lvname
# Check size of Filesystem
df -h
Activate / Deactivate LV #
# Check logical volume status
sudo lvscan
# Activate logical volume
sudo lvchange -ay /dev/ubuntu-vg/esxi1-lv
# Deactivate logical volume
sudo lvchange -an /dev/vgname/lvname
Example: Extend LV & Filesystem together #
Check available LVM storage with pvs
The output should look like this:
PV VG Fmt Attr PSize PFree
/dev/vda3 ubuntu-vg lvm2 a-- 18.22g 8.22g
Use df -h to find the dev-mapper name of the LV:
Output:
/dev/mapper/ubuntu--vg-ubuntu--lv 9.8G 7.7G 1.6G 83% /
Extend LV and filesystem together with the -r option:
# Extend LV and filesystem: For example +8GB
sudo lvextend -r -L +8G /dev/mapper/ubuntu--vg-ubuntu--lv
# Extend LV and filesystem: For example 100% of available VG storage
sudo lvextend -r -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv
Example: Remove LV #
Use df -h to find the dev-mapper name of the LV. Output:
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/ubuntu--vg-ubuntu--lv 9.8G 7.7G 1.6G 83% /
Unmount and remove the logical volume:
sudo umount /dev/mapper/ubuntu--vg-ubuntu--lv
sudo lvremove /dev/mapper/ubuntu--vg-ubuntu--lv
Don’t try with the root partition ;)
LVM Snapshot #
# Create LVM Snapshot: Syntax
sudo lvcreate --size [SnapshotSize] --snapshot --name [SnapshotName] [VolumeGroupName]/[OriginalLogicalVolumeName]
# Create LVM Snapshot: Example
sudo lvcreate --size 5G --snapshot --name root-snapshot debian-vg/root
# Remove the Snapshot: Syntax
sudo lvremove [VolumeGroupName]/[SnapshotName]
# Remove the Snapshot: Example
sudo lvremove debian-vg/root-snapshot
- --size
The size specified depends on the expected changes to the original volume during the snapshot’s life. It doesn’t need to be as large as the original volume unless you expect the total changes to match the original volume’s size.
- --snapshot --name
Define a name for the snapshot
Note: LVM does not directly support overwriting an existing snapshot. You have to manually remove the old snapshot before creating a new one with the same name.
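To roll the origin volume back to the snapshot state, the snapshot can be merged into it; this destroys the snapshot. A sketch with the example names from above; if the origin is in use (like the root LV), the merge only completes after the volume is reactivated or the system is rebooted:
# Merge the snapshot back into the origin LV
sudo lvconvert --merge debian-vg/root-snapshot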
Software RAID: Mdadm #
Create RAID #
| Command | Description |
| --- | --- |
| sudo apt install mdadm | Install mdadm utility |
| sudo mdadm --detail --scan | List available RAIDs |
| Disks | |
| lsblk | List available block devices |
| mdadm --examine /dev/sdb /dev/sdc | Examine disks for any existing RAID blocks |
| mdadm: No md superblock detected on /dev/sdb | Output |
| mdadm: No md superblock detected on /dev/sdc | Output |
| Create RAID | |
| mdadm [mode] <raiddevice> [options] <component-devices> | Syntax |
| mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc | Create RAID 1 |
| cat /proc/mdstat | Check if the RAID was created |
| watch -n1 cat /proc/mdstat | Monitor building process |
| Verify RAID | |
| mdadm --detail /dev/md0 | Check status (active sync) |
| Save RAID | |
| mdadm --detail --scan \| sudo tee -a /etc/mdadm/mdadm.conf | Save the array layout; the array gets reassembled automatically at boot |
| update-initramfs -u | Update the initramfs |
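A typical follow-up once the array is built, assuming the /dev/md0 array from above and a hypothetical /mnt/md0 mount point:
# Create a filesystem on the array and mount it
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /mnt/md0
sudo mount /dev/md0 /mnt/md0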
Delete RAID #
Find the mountpoint of the RAID with df -h, for example:
/dev/md0 7.3T 262G 6.6T 4% /mnt/raid5_md0
| Command | Description |
| --- | --- |
| Unmount | |
| umount /mnt/raid5_md0 | Unmount RAID |
| df -h | Check if RAID is unmounted |
| vi /etc/fstab | Remove RAID mount references |
| Stop RAID | |
| mdadm --stop /dev/md0 | Stop RAID |
| mdadm: stopped /dev/md0 | Shell output |
| Delete RAID | |
| mdadm --zero-superblock /dev/sdb | Remove RAID metadata / configuration from disks |
| mdadm --zero-superblock /dev/sdc | Repeat for all RAID disks |
| cat /proc/mdstat | Check that the RAID is removed |
| Edit mdadm.conf | |
| vi /etc/mdadm/mdadm.conf | Remove the array definition |
| ARRAY /dev/md0 ... | Delete this line |
| Update initramfs | |
| update-initramfs -u | Save system configuration changes |
Encrypt RAID #
| Command | Description |
| --- | --- |
| apt install cryptsetup | Should already be installed |
| cryptsetup luksFormat /dev/md0 | Format with LUKS encryption |
| YES | Confirm to remove all data |
| Enter passphrase for /dev/md0 | Enter passphrase |
| cryptsetup luksOpen /dev/md0 md0_enc | Open the encrypted volume |
| Enter passphrase for /dev/md0 | Enter passphrase |
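To get the encrypted volume unlocked at boot (with a passphrase prompt), an entry can be added to /etc/crypttab. A minimal sketch, reusing the md0_enc mapping from above:
# /etc/crypttab: <name> <device> <keyfile> <options>
md0_enc /dev/md0 none luks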
LVM example #
| Command | Description |
| --- | --- |
| pvcreate /dev/mapper/md0_enc | Create LVM PV |
| pvs | Check PV |
| vgcreate md0_enc_vg /dev/mapper/md0_enc | Create LVM VG |
| vgs | Check VG |
| lvcreate -n md0_enc_lv1 -l 100%FREE md0_enc_vg | Create LVM LV, allocate 100% |
| lvcreate -n md0_enc_lv1 -L 100G md0_enc_vg | Create LVM LV, allocate 100G |
| lvs | Check LV |
| mkfs.ext4 /dev/md0_enc_vg/md0_enc_lv1 | Create ext4 filesystem |
| mount /dev/md0_enc_vg/md0_enc_lv1 /mnt/md0 | Mount filesystem |
| /dev/md0_enc_vg/md0_enc_lv1 /mnt/md0 ext4 defaults 0 0 | Add entry to fstab |
Restore RAID #
Use the following commands to gather the necessary RAID details:
| Command | Description |
| --- | --- |
| lsblk | List hard drive details |
| sudo mdadm --detail --scan | List available RAIDs |
| cat /proc/mdstat | List RAID details / drives |
Note: The following commands must be run for every RAID that includes a partition of the failed disk, in this example sdb1, sdb2, sdb3…
# Verify the RAID / disk status
sudo mdadm --detail /dev/md1
# Mark the failed drive status as failed:
sudo mdadm --manage /dev/md1 --fail /dev/sdb1
# Verify status
cat /proc/mdstat
# Remove the failed drive from RAID:
sudo mdadm --manage /dev/md1 --remove /dev/sdb1
Physically replace the failed drive
# Find new disk
lsblk
# Copy partition table to new disk
sudo sfdisk -d /dev/sda | sudo sfdisk /dev/sdb
# Add the new drive's partition to the RAID array
sudo mdadm --manage /dev/md1 --add /dev/sdb1
# Monitor the RAID recovery
watch cat /proc/mdstat
# Verify the RAID status
sudo mdadm --detail /dev/md1
Troubleshooting #
I recently had a case where, as a result of an unexpected power outage followed by a faulty hard drive and several server reboots, a RAID configuration changed from RAID 6 to RAID 0.
Here are the steps to solve the problem. First, compare the RAID details with the details stored on the RAID disks: in my case the disks had 4 RAID devices defined, but the RAID itself only 3.
mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Thu Sep 9 10:50:37 2021
Raid Level : raid6
Array Size : 3906758656 (3725.78 GiB 4000.52 GB)
Used Dev Size : 1953379328 (1862.89 GiB 2000.26 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
...
mdadm --examine /dev/sda2
/dev/sda2:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x1
Array UUID : fc3e9d73:f9fc83e6:823fa4a4:2f50be51
Name : ubuntu-server:0
Creation Time : Thu Sep 9 10:50:37 2021
Raid Level : raid6
Raid Devices : 4
...
Change the RAID Configuration:
# Stop the RAID
mdadm --stop /dev/md0
# Open MDADM Configuration File
vi /etc/mdadm/mdadm.conf
# Add the correct RAID level and device count to the specific RAID definition
level=raid6 num-devices=4
# For Example:
ARRAY /dev/md0 metadata=1.2 name=ubuntu-server:0 UUID=fc3e9d73:f9fc83e6:823fa4a4:2f50be51 level=raid6 num-devices=4
# Assemble the RAID
mdadm --assemble --scan
# Save the changed RAID configuration to take effect at boot time
update-initramfs -u -k all
Swapfile #
| Command | Description |
| --- | --- |
| List Swapfile | |
| sudo swapon --show | List configured swap files |
| cat /proc/swaps | List all swap files |
| free -h | List available RAM and swap |
| Disable & Remove Swapfile | |
| sudo swapoff /swapfile | Disable swapfile (with path) |
| sudo rm /swapfile | Remove swapfile (with path) |
| vi /etc/fstab | Remove swapfile entry in fstab |
| # /swap.img none swap sw 0 0 | Comment out or remove the entry |
| Create Swapfile | |
| sudo fallocate -l 2G /swapfile | Create swapfile (2G) |
| ls -lh /swapfile | Verify size of the swapfile |
| sudo chmod 600 /swapfile | Change permissions / root access only |
| ls -lh /swapfile | Verify the permissions |
| sudo mkswap /swapfile | Mark the file as swap space |
| sudo swapon /swapfile | Enable the swapfile |
| sudo swapon --show | Verify the swapfile |
| Make Swap permanent / Add entry in fstab | |
| echo '/swapfile none swap sw 0 0' \| sudo tee -a /etc/fstab | Add entry |
Smartmontools #
Smartmontools makes it possible to monitor and check the health of hard drives and SSDs.
Overview #
# Install Smartmontools / SmartCTL
apt install smartmontools -y
# Check Status
systemctl status smartd
# Restart Smartmontools
systemctl restart smartd
# Open Smartmontools Configuration File
vi /etc/smartd.conf
# Help: Smartmontools
man smartctl
# Help: Smartmontools Configuration File
man smartd.conf
Find Devices #
# List all storage devices
smartctl --scan
# List block devices (could be helpful)
ls /sys/block
Check Devices #
# List: Device model, serial number
smartctl -i /dev/sda
# Health Test: Outputs only Passed / Failed
smartctl -H /dev/sda
# Diagnose potential device issues
smartctl -a /dev/sda
Example #
Example for hard drive attribute values.
smartctl -a /dev/sda | less
Watch out for the following attributes in the console output:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
...
5 Reallocated_Sector_Ct 0x0033 100 100 010 Pre-fail Always - 0
...
197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0
Configuration File #
Example for Smartmontools Configuration File.
List disks with physical connection path:
ls -la /dev/disk/by-path/
# Open Smartmontools config
vi /etc/smartd.conf
# Example: Scan all Devices (not recommended)
DEVICESCAN -a
# Example: Bulk Config
DEVICESCAN -d removable -n standby -m root -M exec /usr/share/smartmontools/smartd-runner -s (S/../.././02|L/../../5/18)
# Example: Single Disk
/dev/disk/by-path/pci-0000:00:11.4-ata-1 -m root -M exec /usr/share/smartmontools/smartd-runner -s (S/../.././02|L/../../5/18)
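Self-tests can also be triggered manually, independent of the smartd schedule:
# Run a short self-test (runs in the background on the drive)
sudo smartctl -t short /dev/sda
# Show the self-test results afterwards
sudo smartctl -l selftest /dev/sda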
Disk speed #
The following command tests the disk’s read speed; the --direct option is used to bypass the kernel cache and read directly from the disk:
sudo hdparm -tT --direct /dev/sda
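hdparm only measures reads. A common rough test for write speed is dd with direct I/O; a sketch, assuming a scratch file on the filesystem under test (delete it afterwards):
# Write 1 GiB while bypassing the page cache
dd if=/dev/zero of=/mnt/testfile bs=1M count=1024 oflag=direct
# Clean up
rm /mnt/testfile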
NFS Export #
Installation & Status #
# Update package manager
sudo apt update
# Install NFS
sudo apt install nfs-kernel-server
# Check status
sudo systemctl status nfs-server
NFS Export / Shares #
# Edit NFS shares
sudo vi /etc/exports
# Restart NFS Server
sudo systemctl restart nfs-server
# Reload config without stopping NFS server
sudo systemctl reload nfs-server
# Re-export all NFS shares
sudo exportfs -r
Examples: Permissions #
# Export read only
/srv/nfs/nfs_share 192.168.60.101(ro,sync)
# Export read and write mode
/srv/nfs/nfs_share 192.168.60.101(rw,sync)
# Export with root permissions for the NFS client (no_root_squash)
/srv/nfs/nfs_share 192.168.60.101(rw,sync,no_root_squash)
Examples: Clients #
# Export to all hosts
/srv/nfs/nfs_share *(sync)
/srv/nfs/k8s_share *(rw,sync,no_subtree_check,no_root_squash)
# Export to single host
/srv/nfs/nfs_share 192.168.60.101(sync)
# Export to multiple hosts
/srv/nfs/nfs_share 192.168.60.101(sync)
/srv/nfs/nfs_share 192.168.60.102(sync)
/srv/nfs/nfs_share 192.168.60.103(sync)
# Export to IP range (long & short netmask notation)
/srv/nfs/nfs_share 192.168.60.0/255.255.255.0(sync)
/srv/nfs/nfs_share 192.168.60.0/24(sync)
# Export to subdomains (with wildcard)
/dump/backups *.jklug.work(sync)
NFS Mount on Client #
Install NFS Client #
# Install NFS Client
sudo apt install nfs-common -y
Verify NFS connectivity #
# Find the showmount bin
find / -name showmount 2>/dev/null
# Verify that the NFS server is correctly configured
/usr/sbin/showmount -e NFS-server-IP
Mount NFS Share #
# Create directory for NFS export
mkdir -p /mnt/mountpoint
# Mount NFS export (verbose)
mount -t nfs -vvvv IP:/srv/nfs/nfs_share /mnt/mountpoint
# Mount with specific NFS version
mount -o nfsvers=4 IP:/srv/nfs/nfs_share /mnt/mountpoint
# Mount with read only permissions
mount -o ro IP:/srv/nfs/nfs_share /mnt/mountpoint
# Auto-detect file system / default option rw
mount IP:/srv/nfs/nfs_share /mnt/mountpoint
# List all currently mounted NFS exports
mount -t nfs
# List all available NFS exports for the client
showmount -e
Note: /mnt/ is a traditional temporary mount point.
Permanently mount with fstab #
# Open fstab
sudo vi /etc/fstab
# Add entry for NFS share
IP:/srv/nfs/nfs_share /mountpoint nfs defaults 0 0
# Mount NFS export
sudo mount /mountpoint
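If the NFS server might be unreachable at boot, the nofail option keeps the system from hanging on the missing share:
# fstab entry that does not block the boot
IP:/srv/nfs/nfs_share /mountpoint nfs defaults,nofail 0 0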
Optional: Specify NFS Version #
Open /etc/nfs.conf, then uncomment and enable or disable specific NFS versions with “y” and “n”:
[nfsd]
# debug=0
# threads=8
# host=
# port=0
# grace-time=90
# lease-time=90
# udp=n
# tcp=y
vers2=n
vers3=n
vers4=y
vers4.0=y
vers4.1=y
vers4.2=y
# rdma=n
# rdma-port=20049
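After restarting the NFS server and remounting, the negotiated protocol version can be verified from the client side:
# Show mounted NFS shares with their options (including vers=)
nfsstat -m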
Samba #
| Command | Description |
| --- | --- |
| Samba Commands | |
| apt install samba -y | Install Samba |
| smbd --version | Check version |
| vi /etc/samba/smb.conf | Edit config |
| sudo service smbd restart | Restart Samba service |
| systemctl status smbd | Samba service status |
| systemctl start smbd | Start Samba service |
| systemctl stop smbd | Stop Samba service |
| systemctl enable smbd | Enable service at boot |
| systemctl disable smbd | Disable service at boot |
| testparm | List Samba config |
| Samba DB Commands | |
| smbpasswd -a username | Add Samba DB user / change PW |
| smbpasswd -e username | Enable Samba DB user |
| smbpasswd -d username | Disable Samba DB user |
| smbpasswd -x username | Remove Samba DB user |
| pdbedit -L | List Samba DB users |
| sudo pdbedit -L \| grep username | Check if user exists in Samba DB |
Share without credentials #
[sharename]
path = /share/path
browsable = yes
writable = yes
guest ok = yes
read only = no
Add the configuration to /etc/samba/smb.conf. The guest ok = yes parameter allows access to the share without credentials, and writable = yes grants write permissions.
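A quick way to test guest access from a client, assuming the smbclient package is installed (-N suppresses the password prompt):
# List shares anonymously
smbclient -N -L //server-IP
# Connect to the share as guest
smbclient -N //server-IP/sharename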
Share with credentials #
Configuration for specific users:
[sharename]
path = /share/path/username
browsable = yes #optional
read only = no
force create mode = 0660 #forces permissions for created files
force directory mode = 2770 #forces permissions for created folders
valid users = username1, username2
Add system user (without home directory):
adduser --no-create-home --shell /usr/sbin/nologin --ingroup sambashare username
Add system user (with home directory):
sudo adduser --home /share/path/username --no-create-home --shell /usr/sbin/nologin --ingroup sambashare username
Add user to Samba DB & enable user:
smbpasswd -a username && smbpasswd -e username
Define directory owner:
chown username:sambashare /share/path/username
Change permissions (the leading 2 sets the setgid bit, so new files inherit the group ownership):
chmod 2770 /share/path/username
Restart Samba service:
sudo service smbd restart
Configuration for Samba group:
[sharename]
path = /share/path/all
browsable = yes #optional
read only = no
force create mode = 0660
force directory mode = 2770
valid users = @sambashare #all Samba users
Define directory owner:
chown :sambashare /share/path/all
Change permissions:
chmod 2770 /share/path/all
Restart Samba service:
sudo service smbd restart
Mount Samba Share #
Temporary mount Samba share
This should only be used for testing purposes; as a best practice, start the command with a space so that it does not show up in the bash history:
mount -t cifs //IP/share /mount/path -o username=...,password=...
Mount Samba share with credential file
Create file for the user credentials:
vi /root/.sambacredentials
username=...
password=...
Change the file permissions to owner read only:
chmod 400 /root/.sambacredentials
Test the mount:
mount -t cifs //IP/share /mount/path -o credentials=/root/.sambacredentials
Add entry to /etc/fstab:
//IP/share /mount/path cifs vers=3.0,credentials=/root/.sambacredentials 0 0
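The fstab entry can be tested without a reboot:
# Mount everything from fstab (includes the new cifs entry)
sudo mount -a
# Verify the mounted cifs shares
mount -t cifs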
Samba Share Firewall Rules #
sudo ufw allow 137/udp &&
sudo ufw allow 138/udp &&
sudo ufw allow 139/tcp &&
sudo ufw allow 445/tcp