Wipe Out Aliyun Server Guard Agent

Why not simply reinstall via the panel?

Server Guard Agent (SGA) comes installed by default on Aliyun VMs, and Aliyun provides an official uninstall guide.

However, after uninstalling SGA, your VM still tries to connect to SGA's remote servers and upload encrypted data.

After I blocked all IPs of the SGA service, an unknown key was inserted into my .ssh/ directory. I opened a ticket, and they told me "you were hacked". I also found that my sshd binary had been modified.
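
A minimal sketch of the blocking and the integrity check, assuming a Debian/Ubuntu guest; the IP range below is a placeholder, not the real SGA addresses:

# Block outbound traffic to the agent's servers.
# 203.0.113.0/24 is a placeholder; substitute the addresses
# you actually observe the agent connecting to.
iptables -A OUTPUT -d 203.0.113.0/24 -j DROP

# Verify package-managed files against their recorded checksums;
# a modified sshd binary shows up as a changed file here.
dpkg --verify openssh-server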

So we will enable full-disk encryption to protect VMs hosted by a provider that cannot be trusted.

Prepare your VM

You’ll install your custom distro onto a raw disk, with the direct disk boot option. The end result will be a working custom install; however, it will not support disk resizing from within the official panel, nor will it be compatible with the official backup service.

1. Create two raw, unformatted disk images. This can be done from the Aliyun Console.
2. Create two configuration profiles.

Installer profile

Label: Installer
Kernel: Direct Disk
/dev/sda: Boot disk image.
/dev/sdb: Installer disk image.
root / boot device: Standard /dev/sdb

Boot profile

Label: Boot
Kernel: Direct Disk
/dev/sda: Boot disk image.
root / boot device: Standard /dev/sda

Download and Install Image

  1. Boot into Rescue Mode with your Installer disk mounted to /dev/sda, and connect to your VM using the Aliyun Console.
  2. Once in Rescue Mode, download your installation media and copy it to your Installer disk. In this example we’re using the Ubuntu 16.04 server installer.
wget http://mirror.pnl.gov/releases/xenial/ubuntu-16.04.3-server-amd64.iso
dd if=ubuntu-16.04.3-server-amd64.iso of=/dev/sda
  3. Reboot into your Installer configuration profile.
  4. During your installer’s partitioning/installation phase, be sure to instruct it to use the /dev/sda volume and enable full-disk encryption (a verification sketch follows this list).
  5. Once the installation completes, reboot into your Boot profile and open the console. You will have access to your VM.
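
To confirm the disk really is encrypted after the first boot, two quick checks. This assumes the installer set up LUKS; the mapping name sda5_crypt is the Ubuntu installer's default and may differ on your system:

# The root filesystem should sit on a crypto_LUKS container.
lsblk -f

# Show the status of the active dm-crypt mapping.
cryptsetup status sda5_crypt
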
2018/1/11 posted in  Network

Difference between OpenVZ and LXC

Background: What’s a container?

Containers have been around for over 15 years, so why the sudden influx of attention? As compute hardware becomes more elastic, potent, and dense, it becomes possible to run many applications at scale with a lower TCO by eliminating the redundant kernel and guest OS code typically used in a hypervisor-based deployment. That alone is attractive, but containers bring other benefits as well: no hypervisor performance penalty, better visibility, and easier debugging and management.

Because containers share the host’s kernel, binaries, and libraries, they can be packed even more densely than a typical hypervisor environment can pack VMs.

OpenVZ

OpenVZ is a Linux container solution. It was first released in 2005 by SWsoft, now known as Parallels. Though connected to a private, proprietary company, OpenVZ is open source and available for free.

The previously mentioned container projects have been related to BSD. One fundamental difference between BSD and Linux is that Linux is technically just a kernel. All of the tools that make Linux functional are supplemental and from different projects. For example, the chroot command in Ubuntu Linux comes from the GNU coreutils project.

This distinction between BSD and Linux is quite important in the case of OpenVZ. Because containers require kernel-level access, the container code needs to be integrated into the kernel. OpenVZ only released its code as a set of patches and custom-compiled Linux kernels; initially, the team never bothered to get their code into the official Linux kernel.

As explained in a recent OpenVZ blog entry, this was a mistake recognized way back in 2005, and the OpenVZ team has been working to get their code integrated into the main Linux kernel since then. This can sometimes be a very slow and painful process. The Xen project went through the same scenario.

OpenVZ has never really gained widespread acceptance in the Linux community. This is unfortunate since it is a very robust project with a large amount of features.
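
To give a feel for the tooling, here is a rough sketch of an OpenVZ workflow; the container ID and template name are illustrative, and flag names vary somewhat between vzctl versions:

# Create a container from an OS template, start it, and get a shell.
vzctl create 101 --ostemplate centos-6-x86_64
vzctl start 101
vzctl enter 101

# Resource limits are managed per container.
vzctl set 101 --ram 512M --save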

LXC

Finally, there is LXC. Well, before we get into LXC, let us talk about Linux Namespaces. A namespace is another term for segregation. Items in different namespaces are unable to collide or conflict with each other. Chroot can be thought of as a simple filesystem namespace.

As we have seen with all the other container projects, they implement features beyond filesystem segregation: users, processes, and the network are all also segregated.

Starting in 2001, the Linux kernel began supporting a series of namespaces. The first was mount namespaces, which can be thought of as an enhanced filesystem namespace. Since then, Linux has added support for UTS, IPC, PID, user, and network namespaces. This article goes into great detail about each of them.
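
You can experiment with these namespaces directly using the unshare utility from util-linux; a minimal sketch:

# Start a shell in new UTS, PID, and mount namespaces.
sudo unshare --uts --pid --mount --fork /bin/bash

# Inside, change the hostname; the host's hostname is untouched.
hostname inside-namespace

# Inside, the shell is PID 1 of the new PID namespace.
echo $$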

Next, a quick mention of control groups, otherwise known as cgroups. Cgroups limit the amount of resources a certain process can use. For example, a process could be limited to using just 50% of the total CPU on the server.
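
The 50% CPU cap from that example can be set up by hand through the cgroup filesystem; this sketch assumes a cgroup v1 hierarchy mounted at /sys/fs/cgroup:

# Create a cgroup under the cpu controller.
mkdir /sys/fs/cgroup/cpu/demo

# Allow 50ms of CPU time per 100ms period, i.e. 50% of one CPU.
echo 100000 > /sys/fs/cgroup/cpu/demo/cpu.cfs_period_us
echo 50000 > /sys/fs/cgroup/cpu/demo/cpu.cfs_quota_us

# Move the current shell (and its children) into the cgroup.
echo $$ > /sys/fs/cgroup/cpu/demo/tasks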

Between namespaces and cgroups, the Linux kernel has everything it needs to support a modern container system. And that is exactly what LXC is: a collection of utilities that interact with namespaces and cgroups.
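
A typical LXC session with the standard lxc-* utilities looks like this; the container name and download-template arguments are illustrative:

# Create an Ubuntu container from the generic download template.
lxc-create -t download -n demo -- -d ubuntu -r xenial -a amd64

# Start it and attach a shell.
lxc-start -n demo
lxc-attach -n demo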

So, since LXC uses features native to the Linux kernel, this should make it a better choice than OpenVZ, right? I guess that depends on one's opinion of those features.

The Linux namespace and cgroup code is still in development. For example, user namespaces were only finalized a few months ago. Shortly after, they were found to be heavily exploitable.

Security in general is a very subjective and relational topic: what one person is paranoid about may not matter at all to another. Security has always been a hot topic with LXC. Here are several different articles on the subject.

This part of the series summarized various existing container solutions. You might have noticed the added detail for the Linux-based solutions, especially LXC.

2017/12/31 posted in  Network

BitTorrent Traffic Detection with Deep Flow Inspection

1. What is Deep Flow Inspection (DFI)?

As the name implies, the analysis or classification of P2P traffic is flow-based, focusing on the connection-level patterns of P2P applications. Thus, unlike DPI, it does not require any payload analysis. Because no payload analysis is needed, encrypted packets are easily supported. The downside of this approach is the additional step of extracting the connection-level patterns from the P2P traffic. And there is still no rule of thumb for which network features should be used in this method.
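
To make "connection-level patterns" concrete: per-flow statistics such as packet counts, byte counts, and duration can be pulled from a capture without touching payloads. A sketch using tshark (the capture filename is illustrative):

# Summarize every TCP conversation in a capture: frames, bytes,
# and duration per flow, with no payload inspection involved.
tshark -r capture.pcap -q -z conv,tcp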

2. Proposed System

2.1 Training Module

[Figure 1: Proposed system to classify BT packet flows]

2.1.1 Ground truth generation

The ground truth is the set of packet flows with known classes. In order to train a classifier, two types of packet flows need to be captured, namely BT and non-BT packet flows. To capture the BT packets, I manually force the BT client to use a single TCP port (i.e., 1200) for data transfer, so all BT traffic must go through this TCP port. Then I start a sample torrent file, and the BT client automatically starts downloading/uploading the contents. At the same time, I start my packet capturing program to obtain the packets. Similarly, to capture non-BT packets, I run my packet capturing program while creating non-BT network activity, including HTTP, FTP, and SSH. With the known class of the packets in the PCAP files, I can start training the classifier.
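
The capture itself can be as simple as two tcpdump runs keyed on the forced port; the interface name is illustrative, and port 1200 is the one fixed above:

# Everything on TCP port 1200 is BT traffic by construction.
tcpdump -i eth0 -w bt.pcap 'tcp port 1200'

# Traffic captured during the HTTP/FTP/SSH sessions, excluding
# the BT port, forms the non-BT ground truth.
tcpdump -i eth0 -w non-bt.pcap 'tcp and not port 1200'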

2.1.2 Study of DFI classifier accuracy

[Figure 2: Classifier accuracy with different training samples]
Figure 2 shows the classifier accuracy as the number of BT packet flows used to train the classifier increases. The classifier was first trained with a set of BT samples, and then it was tested against other BT packet flows to observe the accuracy. This experiment gives some clues about how many packet flows should be used in order to train a reliable classifier for the DFI module. As expected, the more BT packets used to train the classifier, the better the accuracy. However, as the number of BT packets increases, the classifier saturates at some point; after that, even if more packets are provided, the accuracy does not improve significantly.

2.2 Source code

https://github.com/itsuwari/BitTorrent-Traffic-Detection-with-Deep-Flow-Inspection/

2017/12/23 posted in  Network

Inevitable Comparison: TCP vs UDP

We use Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) to transfer data over the internet.

TCP is the most commonly used protocol because it offers a lot of built-in features such as connection management, error-checking, and ordering. Packet delivery is also guaranteed.

UDP is also one of the most used protocols. While TCP offers a lot of features, UDP just throws packets: there is no connection, error-checking, ordering, etc.

Before talking about use cases, let’s look at their features.

Connection

  • TCP: Connection-oriented (persistent)
  • UDP: Connectionless (fire and forget)

Reliability

  • TCP: Reliable (ordered, guaranteed delivery)
  • UDP: Unreliable (drops and reordering possible)

Weight

  • TCP: Heavy (background mechanisms)
  • UDP: Light (simply throws packets)

Transport

  • TCP: Stream (continuous, ordered)
  • UDP: Datagram (independent deliveries)

Flow Control

  • TCP: Windowing, congestion avoidance
  • UDP: None

Speed

  • TCP: Slow (resending, recovery, error-checking, etc.)
  • UDP: Fast (no overhead)

We use TCP for important data because it provides a reliable, persistent pipeline. For example, HTTP (web), FTP (file transfer), SMTP (email), SSH (terminals), and SQL (DB queries) are built on top of TCP.
We use UDP for time-sensitive data that can tolerate loss, because there is no built-in mechanism for reliability or persistence. For example, games, VoIP services, media streaming, and broadcasting are built on UDP.

Choosing the right protocol depends on your needs. Most developers use TCP because it does pretty much everything built-in and is as easy to use as file I/O. My suggestion: use TCP for less frequent, more important data; use UDP for more frequent, less important data.

I have tried to cover the basic differences between the TCP and UDP protocols, but there is one more thing to understand (where the magic begins!): both are built on top of the Internet Protocol (IP). TCP provides a ‘connection’, but the connection is an illusion; it is established by a three-way handshake. Simply put, TCP is UDP plus the advanced features that some good developers implemented as solutions to industry needs. Have you ever wanted to dig into connection establishment and reliability mechanisms? Do you want to implement your own TCP-like protocol?
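
If you want to see the difference first-hand, netcat can speak both protocols. A quick sketch (port numbers are arbitrary, and flag syntax varies slightly between netcat variants):

# TCP: terminal 1 listens, terminal 2 connects.
# A three-way handshake completes before any data flows.
nc -l 5000
nc localhost 5000

# UDP (-u): no handshake; datagrams are simply sent.
nc -lu 5001
nc -u localhost 5001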

2017/3/11 posted in  Network