This is the chapter where we'll begin writing all our code. You can download the source files for this primer using the link at the top of the page labeled "Book Source Code". Inside, you'll find the final project code and incremental versions in various "chapter" folders. You can follow along by writing the code as you go, or just jump in at various points by using the chapter folder projects.
Our listening post, known as Skytree, will be built with Flask and a REST API extension called flask_restful, and we'll use a MongoDB database for storage. At a high level, it needs to be capable of serving tasks to an implant, storing a record of the tasks that were sent, and receiving the results of those tasks. I chose Flask to build the REST API because I'm comfortable programming in Python and it's quick to get started with. Additionally, I think the source code is fairly easy to read and understand if you're just starting out. I decided to use MongoDB for storage because I'm familiar with it and wanted something that would easily ingest JSON results from the implant. I don't have any strong technical reasons for choosing MongoDB, so feel free to modify the source code to use an SQL database if you'd prefer that instead.
Let's take our first step and write out the starting code for our HTTP listening post. Download the source code for this book and unzip it. We're going to be installing a number of Python packages, so I'd recommend using a tool such as virtualenv to keep a clean environment for the installation. Navigate to the directory called "chapter_2-1". In a terminal window, go to the "Skytree" folder and run pip install wheel, followed by pip install -r requirements.txt, to install the Python library prerequisites for the project. For the database, you'll need to install the MongoDB Community Server. You can read a detailed guide on installing it here if you run into any issues. Then, open the "Skytree" folder in your preferred code editor and find the file called listening_post.py. You'll see the following contents:
import json
import resources
from flask import Flask
from flask_restful import Api
from database.db import initialize_db

# Initialize our Flask app
app = Flask(__name__)

# Configure our database on localhost
app.config['MONGODB_SETTINGS'] = {
    'host': 'mongodb://localhost/skytree'
}

# Initialize our database
initialize_db(app)

# Initialize our API
api = Api(app)

# Define the routes for each of our resources
api.add_resource(resources.Tasks, '/tasks', endpoint='tasks')

# Start the Flask app in debug mode
if __name__ == '__main__':
    app.run(debug=True)
Database & Models Starting File
Let's go over each of the major code blocks in the above file. We start by initializing the Flask app and the database:
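# Initialize our Flask app
app = Flask(__name__)

# Configure our database on localhost
app.config['MONGODB_SETTINGS'] = {
    'host': 'mongodb://localhost/skytree'
}

# Initialize our database
initialize_db(app)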
Go to the "database" folder and you'll see two files:
db.py
models.py
Open db.py and you'll see the following:
from flask_mongoengine import MongoEngine

# Initialize MongoEngine and our database
db = MongoEngine()

def initialize_db(app):
    db.init_app(app)
The above code will initialize the database and it takes our Flask app as input. Open up the models.py file and you'll see where we define our models:
from database.db import db

# Define Task object in database
class Task(db.DynamicDocument):
    task_id = db.StringField(required=True)
The above code tells the database about each field we're storing and the kind of data to expect. For simplicity, we're using a "dynamic document" so that we don't need to specify every field. In the task model, we're requiring that an ID be provided to ensure we can keep track of each task and map results back. Each time we add a new resource for the REST API, we'll want to put a corresponding model specification in this file.
REST API & Resources
To facilitate testing the REST API we're building, we'll use a tool called Postman. You do not need an account to use the tool, just select the "skip" option when you first run the application. I find that this tool is useful for experimenting with APIs and easily interacting with them. I've included a Postman Collection file for reference called "Skytree_REST_API.postman_collection.json" in the root directory of the book source code files. You can import this collection and use it to follow along with the API requests referred to in this chapter. Alternatively, I've included PowerShell snippets to make API requests in case you prefer not to use Postman.
Now, let's go back to the listening_post.py file. In the next block we set up the REST API and specify the resources that map to each of the API endpoints:
# Initialize our API
api = Api(app)

# Define the routes for each of our resources
api.add_resource(resources.Tasks, '/tasks', endpoint='tasks')
The "/tasks" endpoint will be responsible for handling creation of tasks and displaying existing tasks. We'll go into more detail about this resource shortly, but we're just defining the route here.
Lastly, in the final block for our listening_post.py file, we start the Flask app in debug mode:
# Start the Flask app in debug mode
if __name__ == '__main__':
    app.run(debug=True)
To validate that everything is working as expected with our initial code, try running the command python listening_post.py from inside the "Skytree" folder and then in a browser, visit the following address http://127.0.0.1:5000/tasks. You should see a short message that says "GET success!".
Let's add in the behavior for our "tasks" resource next. Open up the file called resources.py and you'll see the following contents:
import uuid
import json
from flask import request, Response
from flask_restful import Resource
from database.db import initialize_db
from database.models import Task

class Tasks(Resource):
    # ListTasks
    def get(self):
        # Add behavior for GET here
        return "GET success!", 200

    # AddTasks
    def post(self):
        # Add behavior for POST here
        return "POST success!", 200
Tasks API
We'll define the behavior for GET requests first. Let's get all the Task objects that we have in the database, convert them to JSON format and put them in a variable. Then, we'll return that in the GET response:
    # ListTasks
    def get(self):
        # Get all the task objects and return them to the user
        tasks = Task.objects().to_json()
        return Response(tasks, mimetype="application/json", status=200)
For the POST, let's get the JSON payload from the request body first and find out how many Task objects are in the request. Next, we'll load it into a JSON object and then, for each Task object, add a UUID for tracking and save it to the database. Finally, we collect everything other than "task_type" and "task_id" into a "task_options" array so we can record it in a TaskHistory object later on. We return a response that includes the Task objects that were added to the database.
    # AddTasks
    def post(self):
        # Parse out the JSON body we want to add to the database
        body = request.get_json()
        json_obj = json.loads(json.dumps(body))

        # Get the number of Task objects in the request
        obj_num = len(body)

        # For each Task object, add it to the database
        for i in range(len(body)):
            # Add a task UUID to each task object for tracking
            json_obj[i]['task_id'] = str(uuid.uuid4())

            # Save Task object to database
            Task(**json_obj[i]).save()

            # Load the options provided for the task into an array for tracking in history
            task_options = []
            for key in json_obj[i].keys():
                # Anything that comes after task_type and task_id is treated as an option
                if (key != "task_type" and key != "task_id"):
                    task_options.append(key + ": " + json_obj[i][key])

        # Return the last Task objects that were added
        return Response(Task.objects.skip(Task.objects.count() - obj_num).to_json(),
                        mimetype="application/json", status=200)
Once you're done adding the code for POST, your resources.py file should look like this:
import uuid
import json
from flask import request, Response
from flask_restful import Resource
from database.db import initialize_db
from database.models import Task

class Tasks(Resource):
    # ListTasks
    def get(self):
        # Get all the task objects and return them to the user
        tasks = Task.objects().to_json()
        return Response(tasks, mimetype="application/json", status=200)

    # AddTasks
    def post(self):
        # Parse out the JSON body we want to add to the database
        body = request.get_json()
        json_obj = json.loads(json.dumps(body))

        # Get the number of Task objects in the request
        obj_num = len(body)

        # For each Task object, add it to the database
        for i in range(len(body)):
            # Add a task UUID to each task object for tracking
            json_obj[i]['task_id'] = str(uuid.uuid4())

            # Save Task object to database
            Task(**json_obj[i]).save()

            # Load the options provided for the task into an array for tracking in history
            task_options = []
            for key in json_obj[i].keys():
                # Anything that comes after task_type and task_id is treated as an option
                if (key != "task_type" and key != "task_id"):
                    task_options.append(key + ": " + json_obj[i][key])

        # Return the last Task objects that were added
        return Response(Task.objects.skip(Task.objects.count() - obj_num).to_json(),
                        mimetype="application/json", status=200)
Let's test out our AddTasks API. Start the listening post with the following:
python listening_post.py
Once the listening post is running, make a POST request in the following format (we're starting out with a simple "ping" task):
POST /tasks HTTP/1.1
Host: localhost:5000
Content-Type: application/json

[
    {
        "task_type": "ping"
    }
]
You can also make the above POST request from PowerShell instead of Postman.
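A one-liner along these lines should work (a sketch using Invoke-RestMethod; the exact snippet shipped with the book's source files may differ):

Invoke-RestMethod -Uri "http://localhost:5000/tasks" -Method Post -ContentType "application/json" -Body '[{"task_type":"ping"}]'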
Now, let's test out the ListTasks API by visiting the endpoint (http://127.0.0.1:5000/tasks) in a browser. You should get back a response that looks something like this:
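[
    {
        "_id": {"$oid": "62a1f3c2e4b0a1b2c3d4e5f6"},
        "task_type": "ping",
        "task_id": "c839c32a-9338-491b-9d57-30a4bfc4a2e8"
    }
]

(The values above are illustrative only; your ObjectId and task_id values will differ.)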
Feel free to play around with the ListTasks and AddTasks APIs. If you add multiple ping tasks, you'll see that each one gets added to the database and then listed in the JSON response when you call ListTasks.
You can find the complete contents of the project so far in the folder called "chapter_2-2". We'll move on now to adding our results APIs, ListResults and AddResults. First, write the following code to define our Result object in the database:
from database.db import db

# Define Task object in database
class Task(db.DynamicDocument):
    task_id = db.StringField(required=True)

# Define Result object in database
class Result(db.DynamicDocument):
    result_id = db.StringField(required=True)
Next, we'll edit our resources.py file to import the Result database object:
from database.models import Task, Result
Now, we can start adding the logic for the Result APIs with the following boilerplate:
class Results(Resource):
    # ListResults
    def get(self):
        # Add behavior for GET here
        return "GET success!", 200

    # AddResults
    def post(self):
        # Add behavior for POST here
        return "POST success!", 200
The ListResults API can be built by adding the following code to return results as a JSON response to the user:
    # ListResults
    def get(self):
        # Get all the result objects and return them to the user
        results = Result.objects().to_json()
        return Response(results, mimetype="application/json", status=200)
You'll note that the above code is very similar to the ListTasks API. We'll start building the AddResults API by handling the POST request. We first check whether the request body actually contains results; if it does, we parse out the JSON in the request body. We save each Result object to the database and get the list of Task objects waiting to be served to the implant. We delete the Task objects we're serving to the implant so that tasks are never executed twice. Then, we send the Task objects to the implant in the POST request response:
    # AddResults
    def post(self):
        # Check if results from the implant are populated
        if str(request.get_json()) != '{}':
            # Parse out the result JSON that we want to add to the database
            body = request.get_json()
            print("Received implant response: {}".format(body))
            json_obj = json.loads(json.dumps(body))

            # Add a result UUID to each result object for tracking
            json_obj['result_id'] = str(uuid.uuid4())
            Result(**json_obj).save()

            # Serve latest tasks to implant
            tasks = Task.objects().to_json()

            # Clear tasks so they don't execute twice
            Task.objects().delete()

            return Response(tasks, mimetype="application/json", status=200)
We add an "else" branch to handle the case where no results are returned; there, we simply return the waiting Task objects and delete them from the database:
        else:
            # Serve latest tasks to implant
            tasks = Task.objects().to_json()

            # Clear tasks so they don't execute twice
            Task.objects().delete()

            return Response(tasks, mimetype="application/json", status=200)
When you're all done, your resources.py file should look like this:
import uuid
import json
from flask import request, Response
from flask_restful import Resource
from database.db import initialize_db
from database.models import Task, Result

class Tasks(Resource):
    # ListTasks
    def get(self):
        # Get all the task objects and return them to the user
        tasks = Task.objects().to_json()
        return Response(tasks, mimetype="application/json", status=200)

    # AddTasks
    def post(self):
        # Parse out the JSON body we want to add to the database
        body = request.get_json()
        json_obj = json.loads(json.dumps(body))

        # Get the number of Task objects in the request
        obj_num = len(body)

        # For each Task object, add it to the database
        for i in range(len(body)):
            # Add a task UUID to each task object for tracking
            json_obj[i]['task_id'] = str(uuid.uuid4())

            # Save Task object to database
            Task(**json_obj[i]).save()

            # Load the options provided for the task into an array for tracking in history
            task_options = []
            for key in json_obj[i].keys():
                # Anything that comes after task_type and task_id is treated as an option
                if (key != "task_type" and key != "task_id"):
                    task_options.append(key + ": " + json_obj[i][key])

        # Return the last Task objects that were added
        return Response(Task.objects.skip(Task.objects.count() - obj_num).to_json(),
                        mimetype="application/json", status=200)

class Results(Resource):
    # ListResults
    def get(self):
        # Get all the result objects and return them to the user
        results = Result.objects().to_json()
        return Response(results, mimetype="application/json", status=200)

    # AddResults
    def post(self):
        # Check if results from the implant are populated
        if str(request.get_json()) != '{}':
            # Parse out the result JSON that we want to add to the database
            body = request.get_json()
            print("Received implant response: {}".format(body))
            json_obj = json.loads(json.dumps(body))

            # Add a result UUID to each result object for tracking
            json_obj['result_id'] = str(uuid.uuid4())
            Result(**json_obj).save()

            # Serve latest tasks to implant
            tasks = Task.objects().to_json()

            # Clear tasks so they don't execute twice
            Task.objects().delete()

            return Response(tasks, mimetype="application/json", status=200)
        else:
            # Serve latest tasks to implant
            tasks = Task.objects().to_json()

            # Clear tasks so they don't execute twice
            Task.objects().delete()

            return Response(tasks, mimetype="application/json", status=200)
The last piece we need to complete the Results APIs is to open the "listening_post.py" file and add the following code, which associates the Results resource with the "/results" endpoint:
# Define the routes for each of our resources
api.add_resource(resources.Tasks, '/tasks', endpoint='tasks')
api.add_resource(resources.Results, '/results')
You can test the AddResults API by sending a POST request with the following mock implant result:
POST /results HTTP/1.1
Host: localhost:5000
Content-Type: application/json

{
    "c839c32a-9338-491b-9d57-30a4bfc4a2e8": {
        "contents": "PONG!",
        "success": "true"
    }
}
The above POST request can also be made from PowerShell.
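For example (again a sketch; the snippet included with the book's source files may differ):

Invoke-RestMethod -Uri "http://localhost:5000/results" -Method Post -ContentType "application/json" -Body '{"c839c32a-9338-491b-9d57-30a4bfc4a2e8":{"contents":"PONG!","success":"true"}}'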
You'll get back an empty array if you haven't added any new tasks, or a list of pending tasks if you added some before calling the AddResults API:
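With no pending tasks the response is simply [], while with a pending ping task it would look roughly like this (values illustrative):

[
    {
        "_id": {"$oid": "62a1f3c2e4b0a1b2c3d4e5f7"},
        "task_type": "ping",
        "task_id": "0b1c2d3e-4f5a-6789-abcd-ef0123456789"
    }
]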
You'll find the project code we've created so far in the folder called "chapter_2-3". The final API we'll add is ListHistory, which will return all the tasks that were successfully served to the implant along with their associated results (once they are received from the implant). Again, let's start by defining the TaskHistory object in our models.py file:
from database.db import db

# Define Task object in database
class Task(db.DynamicDocument):
    task_id = db.StringField(required=True)

# Define Result object in database
class Result(db.DynamicDocument):
    result_id = db.StringField(required=True)

# Define TaskHistory object in database
class TaskHistory(db.DynamicDocument):
    task_object = db.StringField()
Add "TaskHistory" as an import at the top of the resources.py file:
from database.models import Task, Result, TaskHistory
Next, we'll modify our AddTasks API to copy tasks that are sent to the implant into our TaskHistory collection:
            # Load the options provided for the task into an array for tracking in history
            task_options = []
            for key in json_obj[i].keys():
                # Anything that comes after task_type and task_id is treated as an option
                if (key != "task_type" and key != "task_id"):
                    task_options.append(key + ": " + json_obj[i][key])

            # Add to task history
            TaskHistory(
                task_id=json_obj[i]['task_id'],
                task_type=json_obj[i]['task_type'],
                task_object=json.dumps(json_obj),
                task_options=task_options,
                task_results=""
            ).save()
Now, add some scaffolding for our ListHistory API in the resources.py file:
class History(Resource):
    # ListHistory
    def get(self):
        # Add behavior for GET here
        return "GET success!", 200
The first thing we'll do in this API is get all the TaskHistory objects and keep them in a variable to return later. We'll also load any results we have into a collection so we can match them with tasks:
    # ListHistory
    def get(self):
        # Get all the task history objects so we can return them to the user
        task_history = TaskHistory.objects().to_json()

        # Update any served tasks with results from implant
        # Get all the result objects and return them to the user
        results = Result.objects().to_json()
        json_obj = json.loads(results)
Next, we format each result into a friendlier shape for consumption and display, pairing each task_id with its matching task_results value:
        # Format each result from the implant to be more friendly for consumption/display
        result_obj_collection = []
        for i in range(len(json_obj)):
            for field in json_obj[i]:
                result_obj = {"task_id": field, "task_results": json_obj[i][field]}
                result_obj_collection.append(result_obj)
Finally, we search for any results with a task ID that matches tasks that we served previously and insert them into the corresponding TaskHistory object, then we return the TaskHistory objects to the user:
        # For each result in the collection, check for a corresponding task ID and if
        # there's a match, update it with the results. This is hacky and there's probably
        # a more elegant solution to update tasks with their results when they come in...
        for result in result_obj_collection:
            if TaskHistory.objects(task_id=result["task_id"]):
                TaskHistory.objects(task_id=result["task_id"]).update_one(
                    set__task_results=result["task_results"])

        return Response(task_history, mimetype="application/json", status=200)
There's probably a simpler, more elegant way to do the above, but for now it works for our purposes.
The full resources.py file should look like the following:
import uuid
import json
from flask import request, Response
from flask_restful import Resource
from database.db import initialize_db
from database.models import Task, Result, TaskHistory

class Tasks(Resource):
    # ListTasks
    def get(self):
        # Get all the task objects and return them to the user
        tasks = Task.objects().to_json()
        return Response(tasks, mimetype="application/json", status=200)

    # AddTasks
    def post(self):
        # Parse out the JSON body we want to add to the database
        body = request.get_json()
        json_obj = json.loads(json.dumps(body))

        # Get the number of Task objects in the request
        obj_num = len(body)

        # For each Task object, add it to the database
        for i in range(obj_num):
            # Add a task UUID to each task object for tracking
            json_obj[i]['task_id'] = str(uuid.uuid4())

            # Save Task object to database
            Task(**json_obj[i]).save()

            # Load the options provided for the task into an array for tracking in history
            task_options = []
            for key in json_obj[i].keys():
                # Anything that comes after task_type and task_id is treated as an option
                if (key != "task_type" and key != "task_id"):
                    task_options.append(key + ": " + json_obj[i][key])

            # Add to task history
            TaskHistory(
                task_id=json_obj[i]['task_id'],
                task_type=json_obj[i]['task_type'],
                task_object=json.dumps(json_obj),
                task_options=task_options,
                task_results=""
            ).save()

        # Return the last Task objects that were added
        return Response(Task.objects.skip(Task.objects.count() - obj_num).to_json(),
                        mimetype="application/json", status=200)

class Results(Resource):
    # ListResults
    def get(self):
        # Get all the result objects and return them to the user
        results = Result.objects().to_json()
        return Response(results, mimetype="application/json", status=200)

    # AddResults
    def post(self):
        # Check if results from the implant are populated
        if str(request.get_json()) != '{}':
            # Parse out the result JSON that we want to add to the database
            body = request.get_json()
            print("Received implant response: {}".format(body))
            json_obj = json.loads(json.dumps(body))

            # Add a result UUID to each result object for tracking
            json_obj['result_id'] = str(uuid.uuid4())
            Result(**json_obj).save()

            # Serve latest tasks to implant
            tasks = Task.objects().to_json()

            # Clear tasks so they don't execute twice
            Task.objects().delete()

            return Response(tasks, mimetype="application/json", status=200)
        else:
            # Serve latest tasks to implant
            tasks = Task.objects().to_json()

            # Clear tasks so they don't execute twice
            Task.objects().delete()

            return Response(tasks, mimetype="application/json", status=200)

class History(Resource):
    # ListHistory
    def get(self):
        # Get all the task history objects so we can return them to the user
        task_history = TaskHistory.objects().to_json()

        # Update any served tasks with results from implant
        # Get all the result objects and return them to the user
        results = Result.objects().to_json()
        json_obj = json.loads(results)

        # Format each result from the implant to be more friendly for consumption/display
        result_obj_collection = []
        for i in range(len(json_obj)):
            for field in json_obj[i]:
                result_obj = {"task_id": field, "task_results": json_obj[i][field]}
                result_obj_collection.append(result_obj)

        # For each result in the collection, check for a corresponding task ID and if
        # there's a match, update it with the results. This is hacky and there's probably
        # a more elegant solution to update tasks with their results when they come in...
        for result in result_obj_collection:
            if TaskHistory.objects(task_id=result["task_id"]):
                TaskHistory.objects(task_id=result["task_id"]).update_one(
                    set__task_results=result["task_results"])

        return Response(task_history, mimetype="application/json", status=200)
The last step for a fully functional ListHistory API is to add the following route in the listening_post.py file:
# Define the routes for each of our resources
api.add_resource(resources.Tasks, '/tasks', endpoint='tasks')
api.add_resource(resources.Results, '/results')
api.add_resource(resources.History, '/history')
That's it! You'll find the complete Skytree listening post project in the folder "chapter_2-4". You can get a list of the task history by visiting the ListHistory endpoint (http://127.0.0.1:5000/history). It'll be empty if you haven't made any AddTasks requests. If you make an AddTasks request now with a ping task, then call ListHistory, you should get back a response that looks like this:
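[
    {
        "_id": {"$oid": "62a1f3c2e4b0a1b2c3d4e5f8"},
        "task_id": "c839c32a-9338-491b-9d57-30a4bfc4a2e8",
        "task_type": "ping",
        "task_object": "[{\"task_type\": \"ping\", \"task_id\": \"c839c32a-9338-491b-9d57-30a4bfc4a2e8\"}]",
        "task_options": [],
        "task_results": ""
    }
]

(The IDs above are illustrative; your ObjectId and task_id values will differ.)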
You'll see that the TaskHistory object consists of the original JSON task object that was sent to the implant and the task options supplied. When the implant returns a result, the task_results field will be updated with the result contents.
Conclusion
Congrats on a job well done! If you've reached this point, you now have a working HTTP listening post that our implant can talk to! We can use this to send new tasks to our implant and get back results of the tasks we send. We also have the ability to associate specific tasks with results and display a history of the tasks that operators have sent.
It's worth restating that this is a very basic listening post, but it covers the core elements that a command and control framework should offer. While you're starting out, it's helpful to keep things simple and work on more advanced techniques/features when you're confident in the basics. Some examples of what could be built in the future include:
Authentication/authorization controls
User management APIs
Listening post initial setup/install script
In the next chapter, we'll get our feet wet with some C++ code and develop our implant to complete the next major component of our C2 project.
Further Reading & Next Steps
To learn more about building C2 listening posts, see the following resources: