Investigation of BitTorrent Sync (v.2.0) as a P2P Cloud Service (Part 2 - Log File Artefacts)
[This is a second guest diary by Dr. Ali Dehghantanha. You can find his first diary here. If you would like to propose a guest diary, please let us know]
Continuing the earlier post on the investigation of BitTorrent Sync version 2.0, this post discusses the evidence that can be extracted from the log files of BitTorrent Sync version 2.0 on Windows 8.1, Mac OS X Mavericks 10.9.5, and Ubuntu 14.04.1 LTS.
BitTorrent Sync stores its log in the application folder under the filename 'sync.log'. The default maximum log size is 100MB and can be modified by the user. When the maximum size is reached, the log file is renamed to sync.log.old and a new sync.log file is created. As BitTorrent Sync does not encrypt its logs, they can be read with any text editor. The log file is important as it helps to identify BitTorrent Sync events around the time of an incident. Table 1 and Table 2 below summarize notable log entries of forensic interest from sync.log.
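For instance, a practitioner could scan both sync.log and the rotated sync.log.old for entries recorded around the time of an incident. The following minimal Python sketch assumes entries begin with a bracketed '[YYYY-MM-DD HH:MM:SS]' timestamp, as in the examples shown in Table 1; the file names and time window are illustrative:

# Minimal sketch: print sync.log entries recorded within a time window of interest.
# Assumes entries start with a "[YYYY-MM-DD HH:MM:SS]" timestamp (as in Table 1).
import re
from datetime import datetime

LOGS = ["sync.log", "sync.log.old"]        # current log plus the rotated copy
START = datetime(2015, 4, 5, 8, 0, 0)      # illustrative window of interest
END = datetime(2015, 4, 5, 9, 0, 0)
stamp = re.compile(r"^\[(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\]")

for name in LOGS:
    try:
        with open(name, encoding="utf-8", errors="replace") as f:
            for line in f:
                m = stamp.match(line)
                if m:
                    ts = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S")
                    if START <= ts <= END:
                        print(line.rstrip())
    except FileNotFoundError:
        pass  # the rotated log may not exist yet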
Table 1: Log entries of forensic interest from sync.log.

Relevance | Examples of log entries obtained in our research
Enables a practitioner to identify the BitTorrent Sync version installed on the device under investigation. | version: 2.0.93
Assists the practitioner in determining the non-encoded peer ID of the device under investigation. |
A master folder will only be created during identity creation. This potentially allows the practitioner to determine when BitTorrent Sync was first used on a device. |
May assist the practitioner in determining the IP addresses used by the device under investigation. |
Informs the practitioner of the IP addresses used by the peer devices. |
Allows a practitioner to identify the device names of the peer devices. |
Since most peer IDs are stored in base32 format in the metadata and configuration files, these log entries provide a potential method for identifying the actual (non-encoded) peer IDs from the device names. |
May assist the practitioner in determining the share IDs for the shared folders added. |
Enables identification of the shared folder names/IDs created on the device under investigation. |
Assists the practitioner in determining the synced filenames or folder names as well as the addition/creation times (a parsing sketch for these entries follows the table). | [2015-04-05 08:24:17] JOURNAL[22F5]: setting time for file "\\?\C:\Sync\Enron3111.txt" to 1428193457 [2015-04-05 08:24:17] JOURNAL[22F5]: insert file "\\?\C:\Sync\Enron3111.txt" = 131072:22982 …
Informs the practitioner of the folder names for the deleted folders as well as the deletion times. |
Allows the practitioner to determine the local identity's disconnection time. |
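The JOURNAL entries shown in Table 1 record the synced file path together with a Unix epoch timestamp. The following Python sketch, based on the layout of the Enron3111.txt example above, extracts the path and converts the epoch value to a human-readable UTC time:

# Minimal sketch: extract the file path and epoch timestamp from JOURNAL
# "setting time" entries (layout inferred from the example in Table 1).
import re
from datetime import datetime, timezone

journal = re.compile(r'JOURNAL\[[0-9A-Fa-f]+\]: setting time for file "(.+)" to (\d+)')

with open("sync.log", encoding="utf-8", errors="replace") as f:
    for line in f:
        m = journal.search(line)
        if m:
            path, epoch = m.group(1), int(m.group(2))
            print(datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat(), path)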
Table 2: Records of BitTorrent Sync's Application Programming Interface (API) response bodies (in JSON format) of forensic interest from sync.log.

Relevance | Examples of log entries obtained in our research
Provides the practitioner with details about the device under investigation such as the peer ID, device name, last online time, last sync completed time, and folder IDs for the shared folders created/added. |
Assists the practitioner in determining the pending user requests sent to the device under investigation, including the folder IDs (if any), the times when the requests were sent, access permissions, as well as the requester's IP addresses and certificate fingerprints. |
May assist a practitioner in determining the folder names, folder IDs, storage paths, folder sizes, timestamp information, as well as peer device names, peer IDs, and fingerprints associated with the shared folders added by or downloaded to the device under investigation. | …
Informs the practitioner of the storage path for the device under investigation. |
Allows the practitioner to identify the folder name, path, and timestamp references for the shared folders added by the device under investigation. |
Contains a copy of the history.dat file (see section 4.1) at the time of the request. |
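Because the API response bodies are logged as JSON, they can be recovered programmatically. The Python sketch below assumes each body appears inline on a single log line and simply attempts to parse from the first '{' onwards; multi-line bodies would need additional handling:

# Minimal sketch: recover JSON API response bodies embedded in sync.log lines.
# Assumes each body appears inline on a single line (an assumption, not a given).
import json

decoder = json.JSONDecoder()
with open("sync.log", encoding="utf-8", errors="replace") as f:
    for line in f:
        start = line.find("{")
        if start == -1:
            continue
        try:
            body, _ = decoder.raw_decode(line[start:])
        except ValueError:
            continue  # not a parseable JSON object on this line
        print(json.dumps(body, indent=2))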
The next post will discuss BitTorrent Sync v2.0 evidence retrievable from physical memory.
Selecting domains with random names
I often have to go through lists of domains or URLs, and filter out domains that look like random strings of characters (and could thus have been generated by malware using an algorithm).
That's one of the reasons I developed my re-search.py tool. re-search is a tool to search through (text) files with regular expressions. Regular expressions cannot be used to identify strings that look random, which is why re-search has methods to enhance regular expressions with this capability.
We will use this list of URLs in our example:
http://didierstevens.com
http://zcczjhbczhbzhj.com
http://www.google.com
http://ryzaocnsyvozkd.com
http://www.microsoft.com
http://ahsnvyetdhfkg.com
Here is an example to extract alphabetical .com domains from file list.txt with a regular expression:
re-search.py [a-z]+\.com list.txt
Output:
didierstevens.com
zcczjhbczhbzhj.com
google.com
ryzaocnsyvozkd.com
microsoft.com
ahsnvyetdhfkg.com
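For comparison, the same extraction can be reproduced with Python's re module; the matches are identical because '.' is not part of the [a-z]+ character class, so only the last two labels of each domain are returned:

# Equivalent extraction with Python's re module: find alphabetical .com
# domains in every line of list.txt.
import re

with open("list.txt") as f:
    for line in f:
        for domain in re.findall(r"[a-z]+\.com", line):
            print(domain)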
Detecting random-looking domains is done with a method I call "gibberish detection", and it is implemented by prefixing the regular expression with a comment. Regular expressions can contain comments, just like programming languages. This is a comment for regular expressions: (?#comment).
If you use re-search with regular expression comments, nothing special happens:
re-search.py "(?#comment)[a-z]+\.com" list.txt
However, if your regular expression comment prefixes the regular expression and the comment starts with the keyword extra=, then you can use gibberish detection (and other methods; use re-search.py -m for the complete manual).
To use gibberish detection, you use directive S (S stands for sensical). If you want to filter all strings that match the regular expression and are gibberish, you use the following regular expression comment: (?#extra=S:g). :g means that you want to filter for gibberish.
Here is an example that extracts the alphabetical .com domains from file list.txt that are gibberish:
re-search.py "(?#extra=S:g)[a-z]+\.com" list.txt
Output:
zcczjhbczhbzhj.com
ryzaocnsyvozkd.com
ahsnvyetdhfkg.com
If you want to filter all strings that match the regular expression and are not gibberish, you use the following regular expression comment: (?#extra=S:s). :s means that you want to filter for sensical strings.
Classifying a string as gibberish or not is done with a set of classes that I developed based on work done by rrenaud at https://github.com/rrenaud/Gibberish-Detector. The training text is a public domain book in the Sherlock Holmes series, which means that English text is used for gibberish classification. You can provide your own trained pickle file with option -s.
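To illustrate the underlying idea (a simplified sketch, not the actual code used by re-search.py): a character-bigram Markov model is trained on English text, and each string is scored by its average transition log-probability. Gibberish strings tend to score lower, and in practice a cut-off is derived from known-good and known-bad examples; the sketch below only prints raw scores and uses a short toy text instead of the Sherlock Holmes training book.

# Simplified illustration of Markov-chain gibberish detection in the spirit of
# rrenaud's Gibberish-Detector; this is not re-search.py's implementation.
import math
from collections import defaultdict

ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def normalize(text):
    # Keep only lowercase letters and spaces.
    return [c for c in text.lower() if c in ALPHABET]

def train(text):
    # Count character bigrams with additive smoothing (start every count at 1).
    counts = {a: defaultdict(lambda: 1.0) for a in ALPHABET}
    chars = normalize(text)
    for a, b in zip(chars, chars[1:]):
        counts[a][b] += 1
    # Convert counts to log-probabilities per preceding character.
    model = {}
    for a in ALPHABET:
        total = sum(counts[a][b] for b in ALPHABET)
        model[a] = {b: math.log(counts[a][b] / total) for b in ALPHABET}
    return model

def avg_logprob(s, model):
    # Average transition log-probability of the string's bigrams.
    chars = normalize(s)
    pairs = list(zip(chars, chars[1:]))
    if not pairs:
        return 0.0
    return sum(model[a][b] for a, b in pairs) / len(pairs)

# Toy training text; re-search.py trains on a public-domain Sherlock Holmes book.
model = train("it was the best of times it was the worst of times "
              "the quick brown fox jumps over the lazy dog " * 20)

for domain in ["didierstevens", "google", "zcczjhbczhbzhj", "ryzaocnsyvozkd"]:
    print(domain, round(avg_logprob(domain, model), 2))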
Didier Stevens
Microsoft MVP Consumer Security
blog.DidierStevens.com DidierStevensLabs.com