Systems
Figuring out which device a packet went out on
In trying to triage an NFSv3 READDIR lack of response, I had to figure out which interface a packet was arriving on and which one the reply was going back out on. My typical way to invoke tshark is the very simple:
tshark -i any -w /tmp/f.scp
Everyone else uses a pcap extension; for some reason, I find “f” a perfectly fine name for a temporary packet capture I don’t plan to keep.
If I used this approach, I could see the READDIR packet arriving and departing, but I couldn’t tell which interface was being used. If I did this instead,
tshark -i bond0 -w /tmp/f.scp
I could see the READDIR call, but not the reply. I then proceeded to cycle through each and every interface, but no joy at finding the reply. (To be honest, I probably went too fast in my iteration and just missed it.)
I read (somewhere via Google-fu) that tshark would print the interface the packet either arrived or departed on. So, I went back to using “any” and checked:
Frame 72359: 198 bytes on wire (1584 bits), 198 bytes captured (1584 bits) on interface 1
Interface id: 1 (any)
Interface name: any
Not what I wanted. Somehow I figured out that I could use -i multiple times to specify the interfaces I wanted:
tshark -i bond1.80 -i bond0 -i eno1 -i bond1 -i bond1.100 -i bond1.2080 -w /tmp/bonds.scp
And then when I looked at the two packets of interest:
Frame 72359: 198 bytes on wire (1584 bits), 198 bytes captured (1584 bits) on interface 1
Interface id: 1 (bond0)
Interface name: bond0
...
Frame 72366: 1614 bytes on wire (12912 bits), 1614 bytes captured (12912 bits) on interface 4
Interface id: 4 (bond1.100)
Interface name: bond1.100
So now, on a multi-homed system, I can figure out the interfaces on which packets are captured.
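If typing out every interface by hand gets old, a small sketch like this builds the -i arguments automatically (my own helper, assuming a Linux box where /sys/class/net lists the interfaces; the capture file name is just my usual throwaway):

#! /bin/sh
# Capture on every non-loopback interface by name, so each frame
# records its real interface instead of "any".
ARGS=""
for dev in $(ls /sys/class/net | grep -v '^lo$'); do
    ARGS="$ARGS -i $dev"
done
tshark $ARGS -w /tmp/f.scp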
Filtering on NFSv3 procedures
I was asked to figure out why an NFSv3 server was not responding to READDIR requests. Note, I don’t know if this was READDIR or READDIRPLUS. I fired off tshark to capture packets:
tshark -i any -w /tmp/bonds.scp
Hmm, even when filtering on NFS, too many packets to examine (it is a very busy NFSv3 server):
NR_09-20:24:09 pixie ~ $ tshark -r /tmp/bonds.scp | wc -l
Running as user "root" and group "root". This could be dangerous.
140532
NR_09-20:29:13 pixie ~ $ tshark -r /tmp/bonds.scp -Y nfs | wc -l
Running as user "root" and group "root". This could be dangerous.
39350
I could use Wireshark, but nah!
I can use a better filter:
NR_09-20:31:46 pixie ~ $ tshark -r /tmp/bonds.scp -Y "nfs.procedure_v3 == 16 || nfs.procedure_v3 == 17" | wc -l
Running as user "root" and group "root". This could be dangerous.
21
This filter only matches packets whose NFSv3 procedure is 16 (READDIR) or 17 (READDIRPLUS).
You can find the list of NFSv3 procedures at https://datatracker.ietf.org/doc/html/rfc1813#page-27.
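To see who is sending those calls and whether any replies ever show up, a field dump beats counting lines. Here is a sketch (the field names are the standard Wireshark ones for the frame, IP, and ONC RPC dissectors):

tshark -r /tmp/bonds.scp \
    -Y "nfs.procedure_v3 == 16 || nfs.procedure_v3 == 17" \
    -T fields -e frame.number -e ip.src -e ip.dst -e rpc.xid

Matching a call to its reply by rpc.xid makes any missing responses stand out quickly.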
3TB Disk
I bought an external 3TB disk and wanted to attach it to an old Airport Extreme, but it wouldn’t see the disk. It turns out the disk was formatted for Windows.
Okay, Disk Utility to the rescue! But no, I kept getting:
Running operation 1 of 1: Erase “Untitled”…
Unmounting disk
MediaKit reports not enough space on device for requested operation.
Operation failed…
A quick consultation of the Internet Oracle turned up a fix: zero out the start of the disk with dd and then repartition. Okay, but I wanted case-sensitive HFS+, so:
loghyr:~ loghyr$ diskutil list
/dev/disk0 (internal, physical):
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme *1.0 TB disk0
1: EFI EFI 209.7 MB disk0s1
2: Apple_APFS Container disk3 1000.0 GB disk0s2

/dev/disk1 (internal, physical):
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme *1.0 TB disk1
1: EFI EFI 209.7 MB disk1s1
2: Apple_APFS Container disk2 1000.0 GB disk1s2

/dev/disk2 (synthesized):
#: TYPE NAME SIZE IDENTIFIER
0: APFS Container Scheme - +1000.0 GB disk2
Physical Store disk1s2
1: APFS Volume EVO1 703.1 GB disk2s1

/dev/disk3 (synthesized):
#: TYPE NAME SIZE IDENTIFIER
0: APFS Container Scheme - +1000.0 GB disk3
Physical Store disk0s2
1: APFS Volume EVO2 157.6 GB disk3s1
2: APFS Volume Preboot 19.6 MB disk3s2
3: APFS Volume Recovery 506.6 MB disk3s3
4: APFS Volume VM 3.2 GB disk3s4

/dev/disk4 (external, physical):
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme *3.0 TB disk4
1: Apple_HFS 3.0 TB disk4s1

loghyr:~ loghyr$ diskutil unmountDisk force disk4
Forced unmount of all volumes on disk4 was successful
loghyr:~ loghyr$ sudo dd if=/dev/zero of=/dev/disk4 bs=1024 count=1024
1024+0 records in
1024+0 records out
1048576 bytes transferred in 0.452265 secs (2318499 bytes/sec)
Here is where I differ from the link:
loghyr:~ loghyr$ diskutil partitionDisk disk4 GPT JHFSX MacBackAttack 0g
Started partitioning on disk4
Unmounting disk
Creating the partition map
Waiting for partitions to activate
Formatting disk4s2 as Mac OS Extended (Case-sensitive, Journaled) with name MacBackAttack
Initialized /dev/rdisk4s2 as a 3 TB case-sensitive HFS Plus volume with a 229376k journal
Mounting disk
Finished partitioning on disk4
/dev/disk4 (external, physical):
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme *3.0 TB disk4
1: EFI EFI 209.7 MB disk4s1
2: Apple_HFS MacBackAttack 3.0 TB disk4s2
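For the record, the arguments to partitionDisk above are the partition scheme (GPT), the format (JHFSX, i.e., case-sensitive journaled HFS+), the volume name, and the size (0g, which in practice means “use all the remaining space”). To double-check that the volume really came out case-sensitive, diskutil info should report the personality (the grep is just to trim the output):

diskutil info disk4s2 | grep Personality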
Getting static addresses in a Linux client under NAT and VMware Fusion
I had a client working well enough with DHCP, but I really wanted to be able to consistently ssh into it.
I looked at:
/Library/Preferences/VMware Fusion/vmnet8/dhcpd.conf
and determined that I did not have to modify it to get a static address:
allow unknown-clients;
default-lease-time 1800;        # default is 30 minutes
max-lease-time 7200;            # default is 2 hours
subnet 172.16.249.0 netmask 255.255.255.0 {
    range 172.16.249.128 172.16.249.254;
    option broadcast-address 172.16.249.255;
    option domain-name-servers 172.16.249.2;
    option domain-name localdomain;
    default-lease-time 1800;    # default is 30 minutes
    max-lease-time 7200;        # default is 2 hours
    option netbios-name-servers 172.16.249.2;
    option routers 172.16.249.2;
}
host vmnet8 {
    hardware ethernet 00:50:56:C0:00:08;
    fixed-address 172.16.249.1;
    option domain-name-servers 0.0.0.0;
    option domain-name "";
    option routers 0.0.0.0;
}
I.e., I could use addresses 172.16.249.2 -> 172.16.249.127 for static assignment. (There is a bug in that statement, which is why I am writing this down.)
I always skip the first 20 addresses, so I assigned skull to be 172.16.249.21:
KinSlayer:flexfiles loghyr$ more /private/etc/hosts
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1       localhost
255.255.255.255 broadcasthost
::1             localhost
fe80::1%lo0     localhost
172.16.249.1    kinslayer
172.16.249.21   skull
172.16.249.22   kitty
I modified skull’s /etc/sysconfig/network:
[root@skull linux]# more /etc/sysconfig/network
# Created by anaconda
HOSTNAME=skull
and /etc/sysconfig/network-scripts/ifcfg-eno16777736
[root@skull linux]# more /etc/sysconfig/network-scripts/ifcfg-eno16777736
TYPE="Ethernet"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_PEERDNS="yes"
IPV6_PEERROUTES="yes"
IPV6_FAILURE_FATAL="no"
NAME="eno16777736"
UUID="3e93f225-d48a-4de0-919a-5ef5d1f428e7"
ONBOOT="yes"
HWADDR="00:0C:29:98:83:E7"
PEERDNS="yes"
PEERROUTES="yes"
DEVICE=eno16777736
NM_CONTROLLED=no
IPADDR=172.16.249.21
NETMASK=255.255.255.0
GATEWAY=172.16.249.1
DNS1=172.16.249.1
Disabled Network Mangler (er, NetworkManager) and turned on the network service:
service NetworkManager stop
chkconfig NetworkManager off
yum erase NetworkManager
service network start
chkconfig network on
I tested that I could ssh into and out of skull to my laptop. Fine, job done.
Only DNS wasn’t working the next day:
[root@skull linux]# more /etc/resolv.conf
# Generated by NetworkManager
domain localdomain
search localdomain
nameserver 172.16.249.1
I checked online, and found I should be using 172.16.249.2. Fine, job done.
Well, then I couldn’t reach github.com on port 22 to get a project update.
When push comes to shove, I should not have assumed that 172.16.249.1 was special
with this NAT. As far as the guest is concerned, the laptop is neither the DNS server nor the gateway; 172.16.249.2 plays both roles, as the dhcpd.conf above said all along.
So I changed this line in /etc/sysconfig/network-scripts/ifcfg-eno16777736:
GATEWAY=172.16.249.2
And restarted the network. Now my DNS change was gone (why does service network restart add the “# Generated by NetworkManager” line to /etc/resolv.conf??). The practical lesson: with PEERDNS="yes", the DNS1 value in the ifcfg file is what ends up in resolv.conf, so that is the line to fix.
Fine, fixed this line as well:
DNS1=172.16.249.2
And restarted.
Now it all works, I think. 🙂
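A quick way to verify all three pieces from the guest (stock iproute2 and OpenSSH commands; dig comes with bind-utils):

ip route show default                 # should point at 172.16.249.2
dig +short github.com @172.16.249.2   # DNS via the NAT's nameserver
ssh -T git@github.com                 # confirms outbound port 22 works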
Getting mail clients to work with domains at Gmail
My work email is Thomas.Haynes@example.org and is actually maintained at gmail.com.
Both Mail.app and mutt have been a pain to configure for it.
For Mail.app:
- Set it up as normal for a Google IMAP account.
- Then go to Mail -> Preferences, select the account.
- Then, on the “Outgoing Mail Server (SMTP):” line, left-click the server and select “Edit SMTP Server List …”.
- Now, select the server again
- First you’ll want to change the “Description” to be “Example.org” (this is in the “Account Information”)
- Second you will want to select Advanced
- Third, change the “User Name:” from “First.Last@gmail.com” to be “First.Last@example.org”
It should work now.
For mutt, I followed the directions at Consolify your Gmail with MUTT with the exception of the following line:
set smtp_url = "smtp://yourusername@smtp.gmail.com:587/"
I modified it to be:
set smtp_url = "smtp://First.Last@example.org@smtp.gmail.com:587/"
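For context, the neighboring settings from that guide end up looking something like this (a sketch; the folder names come from the guide, and the key point is that both the IMAP and SMTP user names use the example.org address, not the gmail.com one):

set imap_user = "First.Last@example.org"
set smtp_url  = "smtp://First.Last@example.org@smtp.gmail.com:587/"
set folder    = "imaps://imap.gmail.com:993"
set spoolfile = "+INBOX"
set record    = "+[Gmail]/Sent Mail"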
Pullme/Pushyou
Different git Push & Pull (fetch) URLs suggests:
[remote "origin"]
    fetch = +refs/heads/*:refs/remotes/origin/*
    url = git://github.com/chief/global.git
    pushurl = git@github.com:User/forked.git
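The same split can be set from the command line instead of editing .git/config by hand (these are stock git options):

git remote set-url origin git://github.com/chief/global.git
git remote set-url --push origin git@github.com:User/forked.git
git remote -v   # shows the separate fetch and push URLs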
Quick hack to rename all prefixed files in a directory to a new prefix
loghyr:xxx thomas$ ls -1 pnfswars* | sed 's/\(pnfswars\)\(.*\)/mv \1\2 lnfsreg\2/' | sh
And to do this as a script:
loghyr:xxx thomas$ more reprefix.sh
#! /bin/sh
# Usage: reprefix.sh OLD_PREFIX NEW_PREFIX
# Emits and runs a mv command for every OLD_PREFIX* file in the
# current directory, renaming it to NEW_PREFIX*.
ls -1 ${1}* | sed 's/\('${1}'\)\(.*\)/mv \1\2 '${2}'\2/' | sh
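The sed-into-sh trick falls over on file names with spaces or sed metacharacters; a plain loop with POSIX parameter expansion does the same job more safely (my sketch, not part of the original hack):

#! /bin/sh
# reprefix2.sh OLD_PREFIX NEW_PREFIX
# Same rename, but no sed round-trip through sh.
for f in "${1}"*; do
    mv "$f" "${2}${f#"$1"}"
done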