Hack the Box Swag Shop Writeup

- 6 mins read

Hack The Box SwagShop

Introduction

SwagShop was an easy but fun box for me. When this box was active it was also the only way you could buy t-shirts and stickers (now HTB’s shop is publicly available). So, without further blabbering, you can read the writeup below.

Information Gathering

Nmap

We begin our reconnaissance by running an Nmap scan with default scripts and service/version detection.

root@kali:~# nmap -sVC -o nmap_SwagShop.txt 10.10.10.140
Starting Nmap 7.70 ( https://nmap.org ) at 2019-06-11 16:30 EDT
Nmap scan report for 10.10.10.140
Host is up (0.41s latency).
Not shown: 998 closed ports
PORT   STATE SERVICE VERSION
22/tcp open  ssh     OpenSSH 7.2p2 Ubuntu 4ubuntu2.8 (Ubuntu Linux; protocol 2.0)
| ssh-hostkey:
|   2048 b6:55:2b:d2:4e:8f:a3:81:72:61:37:9a:12:f6:24:ec (RSA)
|   256 2e:30:00:7a:92:f0:89:30:59:c1:77:56:ad:51:c0:ba (ECDSA)
|_  256 4c:50:d5:f2:70:c5:fd:c4:b2:f0:bc:42:20:32:64:34 (ED25519)
80/tcp open  http    Apache httpd 2.4.18 ((Ubuntu))
|_http-server-header: Apache/2.4.18 (Ubuntu)
|_http-title: Home page
Service Info: OS: Linux; CPE: cpe:/o:linux:linux_kernel

Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 34.88 seconds

From the above output we can see that ports 22 and 80 are open, so let’s check out what’s going on with port 80.

Hack the Box Luke Writeup

- 9 mins read

Hack The Box Luke

Intro

So they finally retired Luke. While this box was far from one of my favorites, it has a lot of sentimental value to me because it was the first box I rooted after joining Hack The Box. Luke wasn’t all that technically challenging (as you will see in the writeup below). There was a lot of enumeration involved, credential stuffing, a bit of guesswork, and no privilege escalation whatsoever. It taught me to write down everything during a pentest CTF, even if it seems useless. You never know what you’ll need to use later. All of that said, please find my writeup below.

How I Do My CTF Writeups

- 4 mins read

I’ve been playing a lot of CTFs this summer. My goal was obviously to brush up on my offensive security skills, but also to practice doing security writeups. I wanted to post the writeups on my blog and publish them as PDFs. Writing the whole thing in a document editor is miserable, I hate using document editors. Then doing the whole thing again as a blog post just means even more work. So, here’s the workflow I developed this summer to do my writeups once using markdown, and easily publish in both formats.
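The heart of that workflow is a single markdown source rendered two ways. As a rough sketch (the filenames, flags, and blog path here are just illustrative examples, not my exact setup):

```shell
# One markdown file, two outputs: a PDF via pandoc, and the same
# file dropped into the static site's posts directory.
pandoc swagshop.md -o swagshop.pdf --toc -V geometry:margin=1in
cp swagshop.md ~/blog/_posts/2019-06-11-swagshop.md
```

The win is that edits happen in exactly one place; both the blog post and the PDF regenerate from it.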

Onion

- 3 mins read

My .onion address: http://ryankozj554xw2ystipdnvpzrge22pkcogw2h5f4n24ztscir6v5d7id.onion/

The other day I thought about also running this website as a hidden service. Today I set all that up. I’ll admit that it’s not all that practical. I’m clearly not hiding who I am, nor am I trying to hide the IP address of my web server, but whatever. It does provide those with extreme privacy concerns the ability to avoid the clearnet while browsing my blog.
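The setup itself mostly comes down to two lines of Tor configuration pointing at the local web server. A minimal sketch (paths vary by distro, and Tor generates the onion address itself on first start):

```
# /etc/tor/torrc — expose a local web server as a hidden service
HiddenServiceDir /var/lib/tor/blog/
HiddenServicePort 80 127.0.0.1:80
```

After restarting Tor, the generated address can be read from the `hostname` file inside the HiddenServiceDir.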

How I Manage Passwords

- 4 mins read


This post is to outline my personal password management system. It relies entirely upon free and open source software, and it is intended to be self hosted. Passwords are synchronized across multiple devices via an encrypted database file. The database is secured by both a password and a key file, which is to be stored locally.
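The password-plus-key-file idea can be sketched in a few lines of Python. This is purely illustrative of the two-factor concept, not the actual database format (real password managers use their own composite-key schemes and dedicated KDFs such as Argon2):

```python
import hashlib
import secrets

def derive_key(password: bytes, keyfile_bytes: bytes, salt: bytes) -> bytes:
    """Combine a password and a key file into a single encryption key.

    Both factors are required: losing either one makes the
    database undecryptable.
    """
    composite = (hashlib.sha256(password).digest()
                 + hashlib.sha256(keyfile_bytes).digest())
    # Stretch the composite secret so brute-forcing is expensive.
    return hashlib.pbkdf2_hmac("sha256", composite, salt, 100_000)

keyfile = secrets.token_bytes(32)   # stored locally on each device
salt = secrets.token_bytes(16)      # stored alongside the database
key = derive_key(b"correct horse", keyfile, salt)
print(len(key))  # 32-byte key
```

Changing either the password or the key file yields a completely different key, which is why the key file stays off the sync channel and lives only on trusted devices.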

Required Software

This little system consists of two primary software projects, which are listed below. Do keep in mind the obvious fact that the wrong choice of operating system, network provider, or even hardware, can render the use of these open source projects pointless.

Lunchtime PHP Deobfuscation

- 5 mins read

I came across the bit of code posted below today while browsing Stack Overflow. The user who posted the question was asking what this bit of code actually did. He was aware that it was malicious, due to the fact that it was on his server without his knowledge, and obfuscated. Unfortunately the question was marked as off topic: “Questions asking us to recommend or find a book, tool, software library, tutorial or other off-site resource are off-topic”.

I'm Not Flattered: Plagiarism

- 4 mins read

At the moment my blog doesn’t have all that many posts on it, and I really don’t consider myself a serious blogger. I write when I feel like it, and in whatever tone I’m feeling like writing in at the moment. Odd as it may seem, I’m not normally writing with the intent of being read. This doesn’t mean that I don’t care when people read my articles. It’s especially good to receive comments and engage in discussion, but I’m not motivated to find as many readers as possible. I seldom share links to my blog posts on other sites; I simply write posts and visitors find them on search engines, or don’t find them at all.

Goodbye WordPress: Hello Jekyll

- 2 mins read

I’ve been intending on migrating my blog to a static site for quite some time, for security, speed, and generally just being more minimalist. The primary issue preventing me from making the change was fear of commitment. Hugo looked like an excellent option, as did Hyde (not to be confused with the Jekyll Template). I decided to go with Jekyll simply because of the size of its community, and because of GitHub Pages (which I’m still not using until they support HTTPS for custom domains).

Web Scraping is Changing

- 4 mins read

This article isn’t meant to discuss what web scraping is, or why it’s valuable to do. What I intend to focus on instead, is how modern web application architecture is changing how web scraping can/must be performed. A nice article discussing traditional web scraping just appeared in Hacker Newsletter #375 by Vinko Kodžoman. His article tipped my motivation to write this.

Traditional Scraping

Up until recently, data was typically harvested by parsing a site’s markup. Parsing libraries and browser automation frameworks allowed this to be achieved in various ways, and I’ve used both Beautiful Soup and Selenium to achieve what I needed to in the past. Vinko discusses in his article another library, lxml, which I’ve not tried. His explanation of lxml and how it interacts with the DOM is good enough to allow a general understanding of the way scraping is performed. Essentially, your bot reads the markup, and categorizes relevant data for you.
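As a minimal sketch of that traditional approach, here’s Beautiful Soup pulling structured data out of a made-up HTML fragment (the element names and classes are illustrative assumptions, not any real site’s markup):

```python
from bs4 import BeautifulSoup

# A made-up fragment standing in for a fetched page.
html = """
<ul class="products">
  <li class="product"><span class="name">Widget</span><span class="price">$9.99</span></li>
  <li class="product"><span class="name">Gadget</span><span class="price">$19.99</span></li>
</ul>
"""

soup = BeautifulSoup(html, "html.parser")
items = [
    (li.select_one(".name").text, li.select_one(".price").text)
    for li in soup.select("li.product")
]
print(items)  # [('Widget', '$9.99'), ('Gadget', '$19.99')]
```

This works beautifully as long as the data actually lives in the server-rendered markup — which, as the rest of this post discusses, is increasingly not the case.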

12 Months To Beta: MapMoto

- 4 mins read

It’s been about a year, a little over actually, since I started work on my main side project. The app is a motocross track directory, which is hardly a new idea, but I felt existing track directories were lacking a lot of features. This led to me creating MapMoto.


An Idea

I ride motocross a lot, not as much as a few years ago, but a lot. I’m always looking up weather before I ride, looking for hot-line numbers to call to confirm days to ride, and looking for new tracks altogether, especially when traveling. I wrote down everything I wished a motocross track directory would have, and came up with the following list.