Steven Jos Phan
Creative Director, Experience Design
phansteve at gmail dot com

11 September 2024
Setting up an internet host and firewall

Assignment # 1


Server Setup


I chose to set up a server on DigitalOcean rather than Heroku, Dreamhost, or AWS, although it was an arbitrary decision; I like the sound of DigitalOcean. I configured my server to be hosted geographically proximate to NYC, as I expect most of my (hyper-minimal) traffic will be NY-based. There’s a chance I’ll repurpose this server for something else later but in any event, I think this is a reasonable base.

Before firing up, DigitalOcean prompted me to select an OS; I chose the default Ubuntu 24.04, a Linux-based OS. There were a couple other small settings decisions to be made before hitting GO. From there, I needed to wait until I received the configuration and login details from DigitalOcean to proceed any further. Strangely, that message never arrived in my inbox, so I used the in-browser terminal shell on DigitalOcean’s site to gain root access to my server. There I was able to set up a new user to avoid logging in remotely as the root superuser moving forward – I understand that can be a security risk. When logging in as this new user from my laptop’s terminal shell (ssh newUsername@serverIP), things went as planned.

Once I had that new user set up, I used the following command to grant this user sudo permissions:

   sudo adduser YOUR_USERNAME sudo

Then I locked the root user’s password and logged out as root:

    sudo passwd -l root
    logout

At that point, I logged back in as the new user
   ssh YOUR_USERNAME@YOUR_IP_ADDRESS
To update the Linux OS of this fresh install, I used the apt tool (Advanced Package Tool). Refreshing the package lists this way is best practice on a new or seldom-used Linux OS:

   sudo apt update

Then I upgraded any software that needed it:
    sudo apt upgrade

Critical for any web server is a firewall to protect it. I opted for UFW (Uncomplicated Firewall). 

    sudo apt install ufw
Installed some network tools. 
    sudo apt install net-tools
A useful tool for applications like this is ifconfig which allows you to see the status of any network interfaces on the server. 

   ifconfig

I got a response that outlined the configuration for each of the network interfaces, listing the internet address, the MAC address, and much more. It looks like there are two Ethernet interfaces and something called a loopback. Some additional research showed that the loopback is used for communication within the same machine. It allows the system to send network traffic to itself and is primarily used for testing network services locally. For example, if you’re running a web server, you can access it on your own machine using http://127.0.0.1. Devs and sysadmins use it to test applications or services without needing a network connection.
The loopback interface is always “up” (active) by default on most systems. It doesn’t require any physical hardware like a network card because it operates within the system itself. This also means that it isn’t able to support external connections to networks or devices.


 


I installed node.js for future server-side development work – something I’m really looking forward to. 
    sudo apt install nodejs
From here I jumped into some config of my firewall. 

Firewall Setup

I had previously installed the firewall during my initial setup of the server. Now I needed to configure it to function as I need it. These self-explanatory commands permit and deny certain types of connections. 

    $ sudo ufw default allow outgoing
    $ sudo ufw default deny incoming
I enabled TCP connections on port 22, the default ssh port. 
    $ sudo ufw allow ssh
Because I plan to use this as a web server, I enabled HTTP and HTTPS protocols. In addition to the application (http and https), I enabled the transport protocols as well. 

    $ sudo ufw allow http/tcp
    $ sudo ufw allow https/tcp
I enabled settings for typical Node.js development. This allows for custom server development while only opening TCP, so something like a flood of UDP packets aimed at these open ports would still be blocked.

    $ sudo ufw allow 8080/tcp
    $ sudo ufw allow 8081/tcp
Finally, I fired up the firewall.  
    $ sudo ufw enable
I was able to check the status of the firewall with the following command. 
    $ sudo ufw status

All seemed good! 



After I had the firewall up and running for some hours, I kept checking back to see if my logs had any activity. No dice! After some troubleshooting, I realized that I never rebooted my server after installing the firewall, which I think prevented it from doing its thing. A reboot did the trick!

Firewall Logs

I was really impressed at some of the data that I was able to observe when pulling my log file! Here’s the command I used to get access to the logs. Something to note is that I’ve already incorporated some extra commands intended to replace spaces (\s) with tabs (\t). This enables the output of the program to be copied into a spreadsheet with ease. Without using the sed command, the output would have only spaces delineating the individual fields of data and a simple copy/paste into a spreadsheet wouldn’t work. 

    $ sudo cat /var/log/ufw.log | sed -e 's/\s/\t/g'

It looks like this, except there are many, many more screens to scroll through beneath.
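To make the substitution concrete, here’s a hypothetical log line (made-up addresses and timestamp, formatted like a UFW block entry) run through the same sed command:

```shell
# One made-up ufw.log-style line; 's/\s/\t/g' turns every whitespace
# character into a tab so the fields paste into spreadsheet columns.
line='Sep 11 04:12:01 myserver kernel: [UFW BLOCK] IN=eth0 OUT= SRC=203.0.113.9 DST=203.0.113.50 PROTO=TCP DPT=22'
echo "$line" | sed -e 's/\s/\t/g'
```

Note that \s and \t are GNU sed extensions, which is what Ubuntu ships, so this works on the server but may not on other systems.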
 


The end result of the copy/paste into a spreadsheet is visible here. I’m finding that the timestamps in the file are listed in UTC (Coordinated Universal Time, the same as GMT+0), which is currently 4 hours ahead of Eastern Time (bearing in mind Daylight Saving Time).

Some stats:

  • There were 2389 attempts to connect to my server in the 13 hours that I’ve had it up and running.
  • Of those, 1274 attempts were unique IP addresses.
  • 15 separate IP addresses attempted to connect more than 10 times.
  • The IP address 79.110.62.66 tried to connect a total of 385 times!
  • Let’s dig into what we can learn about that IP address: 




Seems like Colin Brown is in Amsterdam, Netherlands and their service provider is Emanuel Hosting based in London, UK. I could probably go deeper on this....

    inetnum: 79.110.62.0 - 79.110.62.255
    netname: ColinBrown
    org: ORG-EL451-RIPE
    country: GB
    admin-c: SH16229-RIPE
    tech-c: SH16229-RIPE
    mnt-routes: EmanuelHostingLTD-mnt
    mnt-domains: EmanuelHostingLTD-mnt
    status: ASSIGNED PA
    mnt-by: MNT-NETIX
    mnt-by: MNT-NETERRA
    created: 2024-04-19T07:15:15Z
    last-modified: 2024-04-19T07:15:15Z
    source: RIPE

    organisation: ORG-EL451-RIPE
    org-name: Emanuel Hosting Ltd.
    country: GB
    org-type: OTHER
    address: 26 New Kent Road, SE1 6TJ London, England
    abuse-c: ACRO54984-RIPE
    mnt-ref: MNT-NETERRA
    mnt-by: EmanuelHostingLTD-mnt
    created: 2023-12-14T15:27:18Z
    last-modified: 2024-08-09T17:20:39Z
    source: RIPE # Filtered

If I keep tracking this data over time, I might be able to learn more about the habits of the transgressors. I might notice trends around times of day that seem really prone to connection attempts (or dare I say malicious attacks?). It’d be cool to become familiar with how and why these attempts come in the way they do. I’m into digital forensics!
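As a sketch of how counts like the ones above can be pulled straight from the log, here’s the kind of pipeline I could use. It’s shown on made-up sample lines; on the server it would read /var/log/ufw.log with sudo.

```shell
# Extract the SRC= field from each UFW block entry, then count
# attempts per source IP, most frequent first.
printf '%s\n' \
  '[UFW BLOCK] IN=eth0 SRC=79.110.62.66 DST=203.0.113.50 PROTO=TCP DPT=22' \
  '[UFW BLOCK] IN=eth0 SRC=79.110.62.66 DST=203.0.113.50 PROTO=TCP DPT=23' \
  '[UFW BLOCK] IN=eth0 SRC=198.51.100.7 DST=203.0.113.50 PROTO=TCP DPT=80' |
  grep -o 'SRC=[0-9.]*' | sort | uniq -c | sort -nr
```

Against the real file it would be: sudo grep -o 'SRC=[0-9.]*' /var/log/ufw.log | sort | uniq -c | sort -nr | head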






  


19 September 2024
Traceroutes

Assignment # 2


This week I’m digging into some investigative work to see how my typical internet traffic is routed across various networks globally on its way. I’ll be using the traceroute tool.

I started off using Google’s Takeout function to download my browsing history. It arrived as a json with each entry formatted as follows: 


   {
   
               "page_transition": "LINK",
               "title": "Frida Kahlo - Wikipedia",
               "ptoken": {},
               "url": "https://en.m.wikipedia.org/wiki/Frida_Kahlo",
               "client_id": "3RKMQ6PmBJ0aQXnLe/7Jcg==",
               "time_usec": 1726714407058954
},



Running the following line in the terminal extracted all the lines that contain "url": and filtered them further to only include URLs that begin with http. The filtered URLs were saved to the file http.txt.


cat BrowserHistory.json | grep '"url":' | grep '"http' > http.txt

It still needed a bit of formatting before I could usefully work with it in a spreadsheet. I opened up the new resulting file to format each entry from this: 

    "url": "https://en.m.wikipedia.org/wiki/Frida_Kahlo"

to this: 

   https://en.m.wikipedia.org/wiki/Frida_Kahlo,

There’s probably a better way to do this (via command line) but the Search and Replace function in VS Code worked decently well. 
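For the record, here’s one command-line way to do that cleanup: a sed one-liner that strips each line down to the bare URL (shown on a made-up sample line matching the format above):

```shell
# Reduce '    "url": "https://...",' down to just the URL itself
echo '    "url": "https://en.m.wikipedia.org/wiki/Frida_Kahlo",' |
  sed -E 's/.*"url": "([^"]*)".*/\1/'
```

Run against the whole file, something like sed -E 's/.*"url": "([^"]*)".*/\1/' http.txt > urls.txt (hypothetical output name) would skip the manual search-and-replace step.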

Now I was able to copy my extremely large text file into Google Sheets. 

This next command was really impressive. It eliminated duplicate entries and instead gave me a frequency count for every site in the list. 

   cat http.txt | sort | uniq -c | sort -nr > visited_most.txt

Based on that, I identified the top ten sites so I could do a deeper dive on those in particular. Those sites are as follows: 

   2568         https://mail.google.com/mail/
   519          https://itp.nyu.edu
   421          https://www.youtube.com
   197          https://keepersecurity.com/vault
   190          https://www.figma.com/design/
   182          https://www.google.com/maps
   175          https://www.nytimes.com
   171          https://secure.chase.com
   149          https://github.com
   146          https://docs.google.com/spreadsheets


I created a new txt file and saved it as topten.txt. Then I tried this line: 


    cat topten.txt | nslookup >> mostvisitedips.txt    

It didn’t work as planned. All the results in the txt file looked like this: 

    ** server can't find https://mail.google.com/mail/: NXDOMAIN
   Server:        100.64.0.2
   Address:    100.64.0.2#53


I used ChatGPT to help me write a new command that would strip the parts of the URL away so I could submit the domain names only. I assume that was my problem. Here’s the new command I ended up trying: 

    cat topten.txt | sed -E 's#(https?://)?([^/]+).*#\2#' | nslookup >> mostvisitedips.txt
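The domain-stripping part of that pipeline can be sanity-checked on its own before anything is fed to nslookup (made-up inputs):

```shell
# Strip the optional scheme and any path, leaving just the hostname
printf '%s\n' \
  'https://mail.google.com/mail/' \
  'http://itp.nyu.edu' \
  'www.nytimes.com/section/world' |
  sed -E 's#(https?://)?([^/]+).*#\2#'
```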

Results!

   Non-authoritative answer:
   Name:    mail.google.com
   Address: 142.251.32.101
   Server:        100.64.0.2
   Address:    100.64.0.2#53

   Non-authoritative answer:
   Name:    itp.nyu.edu
   Address: 128.122.120.76
   Server:        100.64.0.2
   Address:    100.64.0.2#53

   Non-authoritative answer:
   youtube.com canonical name = youtube-ui.l.google.com.
   Name:    youtube-ui.l.google.com
   Address: 142.251.32.110
   Name:    youtube-ui.l.google.com
   Address: 142.251.35.174
   Name:    youtube-ui.l.google.com
   Address: 142.251.40.110
   Name:    youtube-ui.l.google.com
   Address: 142.251.40.142
   Name:    youtube-ui.l.google.com
   Address: 142.251.40.174
   Name:    youtube-ui.l.google.com
   Address: 142.250.80.46
   Name:    youtube-ui.l.google.com
   Address: 142.250.80.78
   Name:    youtube-ui.l.google.com
   Address: 142.250.80.110
   Name:    youtube-ui.l.google.com
   Address: 142.250.176.206
   Name:    youtube-ui.l.google.com
   Address: 142.251.40.206
   Name:    youtube-ui.l.google.com
   Address: 142.251.40.238
   Name:    youtube-ui.l.google.com
   Address: 142.251.41.14
   Name:    youtube-ui.l.google.com
   Address: 142.250.65.174
   Name:    youtube-ui.l.google.com
   Address: 142.250.65.206
   Name:    youtube-ui.l.google.com
   Address: 142.250.65.238
   Name:    youtube-ui.l.google.com
   Address: 142.250.81.238
   Server:        100.64.0.2
   Address:    100.64.0.2#53

   Non-authoritative answer:
   Name:    keepersecurity.com
   Address: 34.194.152.47
   Name:    keepersecurity.com
   Address: 100.25.27.45
   Server:        100.64.0.2
   Address:    100.64.0.2#53

   Non-authoritative answer:
   Name:    www.figma.com
   Address: 18.239.183.123
   Name:    www.figma.com
   Address: 18.239.183.35
   Name:    www.figma.com
   Address: 18.239.183.82
   Name:    www.figma.com
   Address: 18.239.183.70
   Server:        100.64.0.2
   Address:    100.64.0.2#53

   Non-authoritative answer:
   Name:    www.google.com
   Address: 192.0.0.88
   Server:        100.64.0.2
   Address:    100.64.0.2#53

   Non-authoritative answer:
   nytimes.com   canonical name = www.prd.map.nytimes.com.
   www.prd.map.nytimes.com    canonical name = www.prd.map.nytimes.xovr.nyt.net.
   www.prd.map.nytimes.xovr.nyt.net    canonical name = nytimes.map.fastly.net.
   Name:    nytimes.map.fastly.net
   Address: 151.101.45.164
   Server:        100.64.0.2
   Address:    100.64.0.2#53

   Non-authoritative answer:
   secure.chase.com    canonical name = gtm.secure.chase.com.akadns.net.
   gtm.secure.chase.com.akadns.net    canonical name = secure.chase.com.edgekey.net.
   secure.chase.com.edgekey.net    canonical name = e251998.a.akamaiedge.net.
   Name:    e251998.a.akamaiedge.net
   Address: 23.44.111.78
   Name:    e251998.a.akamaiedge.net
   Address: 23.44.111.79
   Name:    e251998.a.akamaiedge.net
   Address: 23.44.111.46
   Name:    e251998.a.akamaiedge.net
   Address: 23.44.111.74
   Name:    e251998.a.akamaiedge.net
   Address: 23.44.111.82
   Name:    e251998.a.akamaiedge.net
   Address: 23.44.111.73
   Name:    e251998.a.akamaiedge.net
   Address: 23.44.111.70
   Name:    e251998.a.akamaiedge.net
   Address: 23.44.111.45
   Name:    e251998.a.akamaiedge.net
   Address: 23.44.111.49
   Server:        100.64.0.2
   Address:    100.64.0.2#53

   Non-authoritative answer:
   Name:    github.com
   Address: 140.82.114.4
   Server:        100.64.0.2
   Address:    100.64.0.2#53

   Non-authoritative answer:
   Name:    docs.google.com
   Address: 142.251.40.238


Most traceroutes to NY Times, NYU, Google, Figma and other big companies were kind of boring. Google has a massive presence in NYC so my traceroutes from Brooklyn to Manhattan weren’t anything to note. Others went straight to the Bay Area. I did find some cool things to report on. 

I used Traceroute Mapper to easily visualize the routing after performing a traceroute in the terminal. Here’s my traceroute map to Keeper, my password manager. GitHub also traced to this same geographic area, with a couple hops in the immediate vicinity. Northern Virginia has a massive concentration of data centers, so this makes sense.




I decided to go back into my browsing history and choose something I suspect is not domestically hosted. I chose louisiana.dk, the museum of art in Denmark. See below. At least I was able to get a traceroute out of the continent! Maybe I’m a really boring web browser. I think I should try a random sampling of websites next time. I consider myself a pretty global web browser, but the top ten sites I visit are clearly utilities, all of which have servers super proximate to my physical location. Still good learnings!

 

Here’s the actual traceroute information to louisiana.dk:

   traceroute 35.214.224.76
   traceroute to 35.214.224.76 (35.214.224.76), 64 hops max, 40 byte packets
    1  10.5.0.1 (10.5.0.1)  12.524 ms  14.096 ms  9.878 ms
    2  * * *
    3  ae0-100.cr2-nyc2.ip4.gtt.net (76.74.37.181)  25.060 ms * *
    4  * * *
    5  * * *
    6  * * *
    7  * 76.224.214.35.bc.googleusercontent.com (35.214.224.76)  86.912 ms





26 September 2024
Phreaking Game Controller

Assignment # 3



For this assignment, I decided to go back in time and play with some OG hacker tech: DTMF (dual-tone multi-frequency), the signaling technology underlying early phone telecom. Phreaking was the term used by those who played around with this stuff beginning in the 60s.

The above diagram shows the system that eventually ended up working. I tried and failed a lot in this process but I’m thrilled that I figured out something that works. I wish I had an analog phone to use as my interface but they aren’t too common these days!
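For reference (this comes from the standard DTMF spec, not my project code): each key press sends two sine tones at once, one from a low-frequency row group and one from a high-frequency column group, and the receiver decodes the pair back into a digit. A tiny shell sketch of the lookup:

```shell
# Standard DTMF frequency grid (Hz):
#        1209  1336  1477  1633
#  697     1     2     3     A
#  770     4     5     6     B
#  852     7     8     9     C
#  941     *     0     #     D
dtmf_pair() {
  case "$1" in
    1|2|3|A) row=697 ;;
    4|5|6|B) row=770 ;;
    7|8|9|C) row=852 ;;
    '*'|0|'#'|D) row=941 ;;
  esac
  case "$1" in
    1|4|7|'*') col=1209 ;;
    2|5|8|0) col=1336 ;;
    3|6|9|'#') col=1477 ;;
    A|B|C|D) col=1633 ;;
  esac
  echo "$row $col"
}

dtmf_pair 5   # the "5" key mixes a 770 Hz and a 1336 Hz tone
```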

Overall, it took me about 25 hours to learn enough about DTMF, VoIP, tunnels, servers, ports, TCP/HTTP connections, Node.js and more. At the end of it, I know a lot more than when I started, but I doubt my game controller will be the most effective interface to win this game!

ChatGPT was instrumental in helping me plot the systems diagram and work with Node.js, which was new to me. It also pointed me towards some of the critical pieces necessary to complete this, like Vonage, my VoIP service, and ngrok, my tunnel provider, which gave me a secure way to access my remote server.

I definitely had my fair share of trouble in implementing this whole system. Going into this project, I knew it seemed feasible but I had no idea exactly what I would need to make it all work. I started off researching VoIP and signed up for a free Vonage account. I ended up putting a few dollars into the account because I assumed that was one reason why my initial attempts were failing, but I’m not actually sure that was necessary. Vonage’s documentation was really helpful and straightforward, but they definitely glossed over some important details about how and where to store the webhook applications on my remote server. Experimenting with webhooks and Node.js for the first time was a challenge, but ChatGPT was critical in helping along the way.

ngrok was suggested by both Vonage and ChatGPT as the favored way to set up a TCP or HTTP tunnel from my remote server to my local client, and I *think* it was necessary to use, but I’m not entirely positive. In lieu of ngrok, I think I could have set up my VoIP webhook server on my local computer, but that felt like a less resilient and perhaps more dangerous solution. As I was configuring webhooks across various servers, I realized I don’t know very much about opening ports and how to protect oneself from potential bad actors. Mental note to self: definitely research this and potentially close up ports behind me after this project. I think I was diligent to only open ports that I needed, but even then, how can I ensure those ports are safe to have open?

Below are some images that help show the various moving parts in motion. This first image shows the raw JSON data coming back. This was pulled from my remote server, which I was SSH’d into, running the webhook that receives data from the VoIP service.

 
 
The below image shows the status of ngrok, the tunnel connecting the remote server and my local client.  



Hopefully the phone dialer on iPhone is a decent interface to use to control the game! I suspect physical buttons like on an analog phone would be more optimal but I’ll have to make do. 

Here’s the GitHub code and documentation for this project. It contains both the remote server-side webhook code and the local client webhook code. There’s a decent amount of configuration that I had to do on Vonage’s site and ngrok’s site too, and some reference for that code and configuration is here: Vonage Interactive VoIP Menus, Vonage NCCO configuration and ngrok tunneling reference.










17 October 2024
Nginx on a Remote Server

Assignment # 4


The assignment for this week was to set up a web server on the remote server that I had spun up earlier, in my case with DigitalOcean. Following these instructions from DigitalOcean to get Nginx set up was super straightforward.


Nginx is “a high-performance, open-source web server and reverse proxy server that is also commonly used as a load balancer, HTTP cache, and mail proxy. It was originally designed as a web server to handle high concurrency, but over time it has evolved into a powerful multipurpose server.” It’s awesome, simple to use and free. 


In the past I’ve registered domains and set up websites on platforms like Cargo or Squarespace but I’ve never hosted any of them on my own server so this is a first for me. I actually take that back – I did the registration and configuration for my parents’ business site in the late 90s. Wildly, I don’t think too much has changed in terms of the domain registration and hosting process since then. So far, it’s pretty manageable especially with the platform documentation to reference. Two things that are critical to handle on a self-hosted page are HTTPS encryption and SSL certificates. Even though I won’t be transmitting any sensitive information through this site it’s still a good practice. 



HTTPS (Hypertext Transfer Protocol Secure) is a secure version of HTTP, which encrypts data transferred between a user’s browser and a website using SSL (Secure Sockets Layer) or its successor TLS (Transport Layer Security). SSL/TLS ensures that sensitive information like passwords and credit card numbers is encrypted, providing authentication and data integrity to protect against eavesdropping and tampering.


The domain that I used in this setup process was one that I had previously registered with Namecheap and have been planning to direct to this server for some time now. This part was somewhat tricky because I had to coordinate settings across a few separate platforms, namely Namecheap, DigitalOcean and the Certificate Authority. I definitely got tripped up a bit, which meant that this setup went from something I thought would be done in an afternoon to something that took me several days to complete, albeit with a huge gap of time in the middle. Here are the configuration settings that I input into Namecheap:





On the Digital Ocean side of things, I configured as follows: 




With both of these configured, any traffic to my domain, serverfrom.earth, will be routed through any of ns1.digitalocean.com, ns2.digitalocean.com, or ns3.digitalocean.com. ChatGPT says the reason why I need all three is:


1. Redundancy and Reliability:

Having multiple nameservers ensures that if one server is down or unavailable, the others can still respond to DNS queries. This improves the reliability of your DNS resolution. By configuring multiple nameservers, you reduce the chances of DNS failure affecting your website’s availability.


2. Load Balancing:

With multiple nameservers, DNS queries can be distributed across the different servers, preventing any single server from becoming overwhelmed with traffic. This helps balance the load and ensure that queries are processed quickly.


3. Geographical Distribution:

Nameservers are often distributed across different geographic regions. By using all three nameservers (ns1, ns2, and ns3), you’re ensuring that DNS queries are resolved more quickly and reliably for users around the world, as their requests are routed to the nearest available nameserver.


4. Failover Protection:

If a nameserver becomes unavailable due to maintenance or an outage, the other nameservers can still resolve DNS queries. This failover protection helps maintain the accessibility of your domain even in the event of server issues.

Helpful to know: when you plug in a request to a domain, a sequence of events happens behind the scenes to translate that domain into an IP address and route your request to the correct server. I just learned that the browser on the client side checks to see if it can resolve the domain into an IP address from its own cache first. If not, then it’ll check a local OS cache. If no cached IP address is to be found locally, the browser contacts a DNS resolver, usually provided by an ISP or a third-party DNS service (e.g., Google DNS, Cloudflare). The DNS resolver is responsible for finding the IP address associated with the domain name. If all previous efforts fail, the resolver recursively queries other DNS servers (root, TLD, then authoritative servers) until it can resolve an IP address. With network traffic redirects and DNS resolving – essentially the inner workings of network traffic – I’m curious what the motivations of these service providers are. It’s crazy to me how it all comes together and how/why the individual participants choose to pitch in.

Instructions warned that DNS mapping could take 24-48 hours after configuration, so I waited days to see if anything would happen. I waited that long, plus a whole additional day over a US holiday. I seriously doubted the holiday would matter but I wasn’t positive. Anyway, after waiting all that time, DNS requests still weren’t forwarding to my IP so I went back in and tweaked things.


The tweak that finally solved my issue was explicitly configuring in DigitalOcean that I wanted DNS requests to map to my IP address. The nameserver configuration that I shared above wasn’t enough; I needed to add these records as well:






At this point, I was able to move forward with my certificate registration and HTTPS setup. Almost instantaneously, domain requests were resolving properly! I could see the Nginx boilerplate page when I visited my URL.


Next step was to upload my own holding page in place of the Nginx boilerplate page. It took me a bit of time to understand where and how to upload my HTML, JS and CSS files to my server. I had never worked with Nginx and I wasn’t sure where exactly it expected these files to be stored on my web server. I searched for an “index.html” file on the server through the CLI, as I hoped I could replace the boilerplate Nginx page at that location. That didn’t work.

ChatGPT helped me figure out that the configuration file at “/etc/nginx/nginx.conf”, which I could open by running the following in the CLI,


sudo nano /etc/nginx/nginx.conf


would allow me to set the location of my “index.html” file. Rather than directing Nginx to look anywhere else, I simply copied my files to the default location:

/usr/share/nginx/html
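For reference, the piece of Nginx configuration that controls this is the root directive inside a server block. A sketch of the relevant lines (the exact contents vary by install, so treat this as illustrative, not a copy of my config):

```nginx
server {
    listen 80;
    server_name serverfrom.earth www.serverfrom.earth;
    root /usr/share/nginx/html;   # directory Nginx serves files from
    index index.html;             # default file returned for "/" requests
}
```

After copying files in, sudo nginx -t validates the configuration and sudo systemctl reload nginx picks up any changes.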

That worked! The temporary page that I uploaded was generated by some creative prompting in ChatGPT. I asked it to create a simple Star Wars opening sequence inspired game. It’s visible here:


https://www.serverfrom.earth

I’ll follow up shortly with some network traffic stats. This site has only been live for <24 hours. 

I’m gonna figure out how to control this interactive page through my VoIP phone system from a couple weeks back. Let’s go! 



01 December 2024
Analyzing HTTP Logs

Assignment # 5


I’ve had my website running for a few weeks now and it’s been the primary location for people to find my final project, which is also the project that I showed at the ITP Winter Show. You can see it live here:


https://www.serverfrom.earth


Let’s have a look at some of the HTTP logs over the course of yesterday and today: 


Yesterday I had 192 access events. Today I’ve had 178 through early afternoon. 
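Counts like these can come from a quick pipeline over the access log. A sketch on made-up lines in the standard combined log format (the real file would be /var/log/nginx/access.log):

```shell
# Pull the date out of each request line, then count events per day
printf '%s\n' \
  '198.51.100.1 - - [17/Dec/2024:09:01:33 +0000] "GET / HTTP/1.1" 200 1265 "-" "UA"' \
  '198.51.100.1 - - [18/Dec/2024:09:01:54 +0000] "GET /a.js HTTP/1.1" 200 99 "-" "UA"' \
  '203.0.113.5 - - [18/Dec/2024:10:00:00 +0000] "GET / HTTP/1.1" 200 1265 "-" "UA"' |
  sed -E 's/.*\[([^:]+).*/\1/' | sort | uniq -c
```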


The vast majority of the events seem to be coming in from the Mozilla/5.0 browsers but there were some fun outliers: 


    SonyEricssonK810i/R1KG Browser/NetFront/3.3 Profile/MIDP-2.0 Configuration/CLDC-1.1

    LG-LX550 AU-MIC-LX550/2.0 MMP/2.0 Profile/MIDP-2.0 Configuration/CLDC-1.1

    POLARIS/6.01 (BREW 3.1.5; U; en-us; LG; LX265; POLARIS/6.01/WAP) MMP/2.0 profile/MIDP-2.1 Configuration/CLDC-1.1

    msnbot/0.11 ( http://search.msn.com/msnbot.htm)

    BlackBerry9530/4.7.0.167 Profile/MIDP-2.0 Configuration/CLDC-1.1 VendorID/102 UP.Link/6.3.1.20.0

    Nokia6230/2.0 (04.44) Profile/MIDP-2.0 Configuration/CLDC-1.1

    Opera/9.64 (Macintosh; PPC Mac OS X; U; en) Presto/2.1.1


The longer string for most of the Mozilla requests looked like this:


    Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36
    

Most of the requests were 1-2 page loads, but some of the longer visits were from Singapore @ 18.141.145.195 (31 GET requests), another Singapore @ 18.141.225.233 (also exactly 31 GET requests) and Singapore @ 18.139.255.165 (31 again!). I wonder if this has something to do with how many unique assets there are across my entire site that are getting checked? Here’s one whole set of 31 GET requests. The lookup says it’s tied back to an Amazon datacenter.






18.141.145.195 - - [18/Dec/2024:09:01:33 +0000] "GET / HTTP/1.1" 200 1265 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/33.0.1750.154 Safari/537.36 OPR/20.0.1387.91"

18.141.145.195 - - [18/Dec/2024:09:01:54 +0000] "GET /assets/firebase-Bsfd7Iv5.js HTTP/1.1" 200 332419 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.114 Safari/537.36"

18.141.145.195 - - [18/Dec/2024:09:01:55 +0000] "GET /assets/d3_charts-BTtswhea.js HTTP/1.1" 200 7522 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:2.0.1) Gecko/20100101 Firefox/4.0.1 Camino/2.2.1"

18.141.145.195 - - [18/Dec/2024:09:01:55 +0000] "GET //cdn.babylonjs.com/babylon.js HTTP/1.1" 404 196 "-" "Mozilla/5.0 (Linux; Android 12; KB2005) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.61 Mobile Safari/537.36"

18.141.145.195 - - [18/Dec/2024:09:01:55 +0000] "GET //cdn.babylonjs.com/babylon.4.2.0.js HTTP/1.1" 404 196 "-" "Mozilla/5.0 (Linux; Android 11; motorola edge 20 fusion) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.61 Mobile Safari/537.36"

18.141.145.195 - - [18/Dec/2024:09:01:55 +0000] "GET /assets/main-DG1HCkgy.js HTTP/1.1" 200 9385 "-" "Mozilla/5.0 (Macintosh; U; PPC Mac OS X; en-US) AppleWebKit/125.4 (KHTML, like Gecko, Safari) OmniWeb/v563.15"

18.141.145.195 - - [18/Dec/2024:09:02:13 +0000] "GET //cdn.babylonjs.com/babylon.4.2.0.js HTTP/1.1" 404 196 "-" "Mozilla/5.0 (X11; Linux i686) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/60.0.3112.78 Chrome/60.0.3112.78 Safari/537.36"

18.141.145.195 - - [18/Dec/2024:09:02:13 +0000] "GET /assets/main-DG1HCkgy.js HTTP/1.1" 200 9385 "-" "SonyEricssonK810i/R1KG Browser/NetFront/3.3 Profile/MIDP-2.0 Configuration/CLDC-1.1"

18.141.145.195 - - [18/Dec/2024:09:02:14 +0000] "GET /assets/d3_charts-BTtswhea.js HTTP/1.1" 200 7522 "-" "Mozilla/5.0 (SymbianOS/9.4; U; Series60/5.0 SonyEricssonP100/01; Profile/MIDP-2.1 Configuration/CLDC-1.1) AppleWebKit/525 (KHTML, like Gecko) Version/3.0 Safari/525"

18.141.145.195 - - [18/Dec/2024:09:02:14 +0000] "GET /assets/firebase-Bsfd7Iv5.js HTTP/1.1" 200 332419 "-" "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.8) Gecko Galeon/2.0.6 (Ubuntu 2.0.6-2)"

18.141.145.195 - - [18/Dec/2024:09:02:15 +0000] "GET //cdn.babylonjs.com/babylon.js HTTP/1.1" 404 134 "-" "LG-LX550 AU-MIC-LX550/2.0 MMP/2.0 Profile/MIDP-2.0 Configuration/CLDC-1.1"

18.141.145.195 - - [18/Dec/2024:09:02:34 +0000] "GET //cdn.babylonjs.com/babylon.4.2.0.js HTTP/1.1" 404 196 "-" "Mozilla/4.0 (compatible; MSIE 6.0; j2me) ReqwirelessWeb/3.5"

18.141.145.195 - - [18/Dec/2024:09:02:34 +0000] "GET /assets/d3_charts-BTtswhea.js HTTP/1.1" 200 7522 "-" "Mozilla/5.0 (Linux; Android 10; Redmi Note 7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.101 Mobile Safari/537.36"

18.141.145.195 - - [18/Dec/2024:09:02:34 +0000] "GET /assets/firebase-Bsfd7Iv5.js HTTP/1.1" 200 332419 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.71 Safari/537.36"

18.141.145.195 - - [18/Dec/2024:09:02:35 +0000] "GET /assets/main-DG1HCkgy.js HTTP/1.1" 200 9385 "-" "Mozilla/5.0 (Linux; Android 11; Redmi Note 8 Pro) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/99.0.4844.88 Mobile Safari/537.36 OPR/68.3.3557.64528"

18.141.145.195 - - [18/Dec/2024:09:02:36 +0000] "GET //cdn.babylonjs.com/babylon.js HTTP/1.1" 404 196 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2876.0 Safari/537.36"

18.141.145.195 - - [18/Dec/2024:09:02:55 +0000] "GET //cdn.babylonjs.com/babylon.js HTTP/1.1" 404 134 "-" "Mozilla/5.0 (Windows NT 5.2; rv:10.0.1) Gecko/20100101 Firefox/10.0.1 SeaMonkey/2.7.1"

18.141.145.195 - - [18/Dec/2024:09:02:56 +0000] "GET /assets/main-DG1HCkgy.js HTTP/1.1" 200 9385 "-" "Mozilla/5.0 (Windows; U; Windows NT 6.0; en-GB; rv:1.9.0.11) Gecko/2009060215 Firefox/3.0.11 (.NET CLR 3.5.30729)"

18.141.145.195 - - [18/Dec/2024:09:02:56 +0000] "GET /assets/d3_charts-BTtswhea.js HTTP/1.1" 200 7522 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko, Foregenix) Chrome/91.0.4472.77 Safari/537.36"

18.141.145.195 - - [18/Dec/2024:09:02:56 +0000] "GET //cdn.babylonjs.com/babylon.4.2.0.js HTTP/1.1" 404 134 "-" "Mozilla/5.0 (iPhone; CPU iPhone OS 8_3 like Mac OS X) AppleWebKit/600.1.4 (KHTML, like Gecko) Version/8.0 Mobile/12F70 Safari/600.1.4"

18.141.145.195 - - [18/Dec/2024:09:02:56 +0000] "GET /assets/firebase-Bsfd7Iv5.js HTTP/1.1" 200 332419 "-" "Mozilla/5.0 (Linux; U; Android 1.5; de-de; Galaxy Build/CUPCAKE) AppleWebKit/528.5  (KHTML, like Gecko) Version/3.1.2 Mobile Safari/525.20.1"

18.141.145.195 - - [18/Dec/2024:09:03:17 +0000] "GET /assets/d3_charts-BTtswhea.js HTTP/1.1" 200 7522 "-" "POLARIS/6.01 (BREW 3.1.5; U; en-us; LG; LX265; POLARIS/6.01/WAP) MMP/2.0 profile/MIDP-2.1 Configuration/CLDC-1.1"

18.141.145.195 - - [18/Dec/2024:09:03:17 +0000] "GET //cdn.babylonjs.com/babylon.4.2.0.js HTTP/1.1" 404 134 "-" "Mozilla/5.0 (webOS/1.3; U; en-US) AppleWebKit/525.27.1 (KHTML, like Gecko) Version/1.0 Safari/525.27.1 Desktop/1.0"

18.141.145.195 - - [18/Dec/2024:09:03:17 +0000] "GET //cdn.babylonjs.com/babylon.js HTTP/1.1" 404 134 "-" "msnbot/0.11 ( http://search.msn.com/msnbot.htm)"

18.141.145.195 - - [18/Dec/2024:09:03:17 +0000] "GET /assets/main-DG1HCkgy.js HTTP/1.1" 200 9385 "-" "Mozilla/5.0 (X11; U; Linux i686; en-US) AppleWebKit/534.15 (KHTML, like Gecko) Ubuntu/10.10 Chromium/10.0.613.0 Chrome/10.0.613.0 Safari/534.15"

18.141.145.195 - - [18/Dec/2024:09:03:17 +0000] "GET /assets/firebase-Bsfd7Iv5.js HTTP/1.1" 200 332419 "-" "Mozilla/5.0 (Linux; Android 11; AC2001) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.101 Mobile Safari/537.36"

18.141.145.195 - - [18/Dec/2024:09:03:36 +0000] "GET //cdn.babylonjs.com/babylon.js HTTP/1.1" 404 196 "-" "Mozilla/5.0 (Linux; Android 8.1.0; TECNO KA7O Build/O11019; wv) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/91.0.4472.120 Mobile Safari/537.36"

18.141.145.195 - - [18/Dec/2024:09:03:36 +0000] "GET /assets/d3_charts-BTtswhea.js HTTP/1.1" 200 7522 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0.3 Safari/605.1.15"

18.141.145.195 - - [18/Dec/2024:09:03:37 +0000] "GET /assets/main-DG1HCkgy.js HTTP/1.1" 200 9385 "-" "Mozilla/5.0 (compatible; MSIE 10.6; Windows NT 6.1; Trident/5.0; InfoPath.2; SLCC1; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; .NET CLR 2.0.50727) 3gpp-gba UNTRUSTED/1.0"

18.141.145.195 - - [18/Dec/2024:09:03:37 +0000] "GET /assets/firebase-Bsfd7Iv5.js HTTP/1.1" 200 332419 "-" "Mozilla/5.0 (Linux; Android 8.0.0; SAMSUNG SM-G935F) AppleWebKit/537.36 (KHTML, like Gecko) SamsungBrowser/12.1 Chrome/79.0.3945.136 Mobile Safari/537.36"

18.141.145.195 - - [18/Dec/2024:09:03:38 +0000] "GET //cdn.babylonjs.com/babylon.4.2.0.js HTTP/1.1" 404 134 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.21 (KHTML, like Gecko) konqueror/4.14.10 Safari/537.21"


Outside of these interesting anomalies, no other patterns immediately jump out at me. Times look randomly spread across the day and the visits seem harmless/normal. 

There are some random robot crawls that are cool to note: 


35.203.211.200 - - [18/Dec/2024:00:51:18 +0000] "GET / HTTP/1.1" 200 1265 "-" "Expanse, a Palo Alto Networks company, searches across the global IPv4 space multiple times per day to identify customers' presences on the Internet. If you would like to be excluded from our scans, please send IP addresses/domains to: scaninfo@paloaltonetworks.com"


185.191.171.13 - - [18/Dec/2024:12:40:01 +0000] "GET /robots.txt HTTP/1.1" 200 1265 "-" "Mozilla/5.0 (compatible; SemrushBot/7~bl; +http://www.semrush.com/bot.html)"


172.206.142.56 - - [17/Dec/2024:23:57:54 +0000] "GET /autodiscover/autodiscover.json?@zdi/Powershell HTTP/1.1" 200 1265 "-" "Mozilla/5.0 zgrab/0.x"
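To surface more of these scanner patterns in bulk, a quick one-liner can tally 404 probes by requested path. This is a sketch assuming the default NGINX combined log format and default log location; adjust the path for your own setup:

```shell
# Tally 404 responses by requested path in an NGINX access log,
# to surface scanner probes like the //cdn.babylonjs.com hits above.
# In the combined log format, $9 is the status code and $7 is the path.
LOG=${1:-/var/log/nginx/access.log}
awk '$9 == 404 { print $7 }' "$LOG" | sort | uniq -c | sort -rn | head
```

Running this on my own log would list the babylon.js probe paths at the top, since they account for most of the 404s above.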









01 December 2024
RESTful Microservices

Assignment # 6

One of the previous projects that I did in this class skipped ahead a bit and used microservices.

I’m trying to learn more about root vs alias directives as I make the configurations necessary to allow microservices. 

Here’s a brief summary that I found: 


FOR ROOT

In NGINX, the root and alias directives define how requests to specific URLs map to files on the server’s filesystem. While they serve similar purposes, they behave differently, and understanding the distinction is key when configuring your server.


    server {

        location / {
    
            root /var/www/example;

            index index.html;

        }

    }


If a request is made to /about:
  • NGINX looks for the file /var/www/example/about.

If a request is made to /:
  • NGINX serves /var/www/example/index.html (defined by the index directive).


Essentially, the full file path is formed by combining the root path and the requested URI.

Example 

    root: /var/www/example;

    Request: http://example.com/assets/style.css

    Resulting file path: /var/www/example/assets/style.css


FOR ALIAS


The alias directive is used to map a specific URI to a different filesystem directory. It replaces the entire request URI with the specified directory.

    server {

        location /static/ {

            alias /var/www/static-files/;

        }

    }

If a request is made to /static/images/logo.png:
  • NGINX serves the file /var/www/static-files/images/logo.png.

The /static/ part of the URI is not included in the file path.

The alias replaces the entire URI prefix with the specified path.

Example:

    alias /var/www/static-files/;

    Request: http://example.com/static/css/style.css

    Resulting file path: /var/www/static-files/css/style.css
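
One gotcha worth flagging that the summary above doesn't mention (this is standard NGINX behavior, not something from the class notes): when alias is paired with a prefix location, the trailing slashes on the location and the alias path should match, or the substituted file path comes out malformed:

    server {

        # Slashes match: /static/css/style.css -> /var/www/static-files/css/style.css

        location /static/ {

            alias /var/www/static-files/;

        }

    }

If the location ends in / but the alias path doesn't (or vice versa), NGINX either drops a separator or doubles one when it splices the paths together.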

Exploration: 

I already have several pages on my server that are functioning as my final project. I wanted to see if I could add some other configs to also redirect to certain pages on my site. 

For example, I tried adding this to my NGINX configuration at (/etc/nginx/sites-available/particle-aqm) 

    location /aqmsubmit {

        alias /var/www/particle-aqm/submit.html;

    }


When I tried to go to serverfrom.earth/aqmsubmit, it downloaded the HTML file at submit.html instead of rendering it. Not sure why. I wanted it to simply load the page that exists at submit.html. 


I need to keep going to get this to work; this is NOT the desired outcome haha. 


On a suggestion from ChatGPT, I modified it to this: 


    location /aqmsubmit {

        root /var/www/particle-aqm;

        index submit.html;

        default_type text/html;

    }


Still no good. Same result. 


Ok - got it working finally with this:


    location /aqmsubmit {

        alias /var/www/particle-aqm/submit.html;

        default_type text/html;

    }


After all that explanation around alias and root, I still got it wrong for the first few tries. Functions now though! (In hindsight, the download happened because the URI /aqmsubmit has no file extension, so NGINX couldn't infer a MIME type and fell back to its default, application/octet-stream, which browsers treat as a download; setting default_type text/html tells the browser to render the response instead.)
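
A quick way to confirm behavior like this is to inspect the response header from the command line; the URL below is just my live site and route from above:

```shell
# Print the Content-Type header for the route. Before the fix this
# would have shown application/octet-stream; after it, text/html.
curl -sI https://serverfrom.earth/aqmsubmit | grep -i '^content-type'
```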

Configuring Node and Microservices

I was already using Node.js to help with the server-side code for my game controller. I haven’t yet configured it to do anything related to microservices but I’m excited to dig in there too. 

I followed the instructions verbatim from the class website. The main steps were to add a new location route in my NGINX configuration, create the server.js script (also from the example) and run it. 
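
The class example itself isn't reproduced here, but a location route of that kind typically proxies a path to the local Node process. The port and path below are assumptions for illustration, not the actual class values:

    location /garden {

        proxy_pass http://127.0.0.1:3000;    # assumed port for the Node service

        proxy_set_header Host $host;

    }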

It worked! 

Using PM2, I configured this node script to run indefinitely. 
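
For reference, the PM2 commands for this are roughly the following; the script filename and process name are assumptions, not necessarily what I used:

```shell
# Start the script under PM2 so it restarts across crashes and reboots.
pm2 start server.js --name garden   # "garden" is an assumed process name
pm2 save                            # persist the current process list
pm2 startup                         # print/install a boot script for restarts
```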

serverfrom.earth/garden delivers the following message: 











01 December 2024
MQTT

Assignment # 7

For the MQTT assignment, I built an air quality monitor sensor atop a Raspberry Pi. The two sensors I’ve attached to the Pi are an SGP30 and a PMSA003I; between them they measure eCO2, total volatile organic compounds (TVOC), and three scales of particulate matter. I’ve integrated some lines of code in my sensor software to send MQTT signals outbound.  

It worked pretty well and as expected when we tested signals in class! 

Here is the code that I used.

Some of the code may have MQTT commented out because I’m no longer sending MQTT signals but am still uploading data to Firebase. This was my final project for other classes as well as what I demonstrated at the show. 
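
Since the sensor code itself isn't shown here, a stand-in sketch: the same kind of outbound publish can be exercised from a shell with the Mosquitto client tools. The broker, topic, and payload below are placeholders, not my actual values:

```shell
# Publish a single fake eCO2 reading over MQTT.
# Requires the mosquitto clients: sudo apt install mosquitto-clients
# Broker host and topic are placeholders -- swap in your own.
mosquitto_pub -h test.mosquitto.org -t "aqm/eco2" -m "412"
```

Subscribing from a second terminal with mosquitto_sub on the same broker and topic is a handy way to confirm the signal is going out.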







Copyright © 2024

Steven Jos Phan