Trandoshan dark web crawler

This repository is a complete rewrite of the Trandoshan dark web crawler. Everything now lives in a single Git repository to ease maintenance.

Why a rewrite?

The first version of Trandoshan (available here) works great, but it is not really professional: the code was starting to become a mess and was hard to manage since it was split across multiple repositories.

I have therefore decided to create and maintain the project in this single repository, where the code for every process is available (as a Go module).

How to build the crawler

Since the Docker images are not available yet, you must run the following script to build the crawler fully:

./scripts/build.sh
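
If you want to build the images by hand instead, the script essentially builds one Docker image per process. A rough equivalent is sketched below (the Dockerfile naming under build/docker and the image tags are assumptions; check the script for the exact commands):

# Build one Docker image per process from the Dockerfiles in build/docker.
# The Dockerfile naming scheme and image tags here are assumptions.
for dockerfile in build/docker/Dockerfile.*; do
    process="${dockerfile##*.}"   # e.g. "crawler" from "Dockerfile.crawler"
    docker build -f "$dockerfile" -t "trandoshan-$process" .
done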

How to start the crawler

Execute ./scripts/start.sh and wait for all containers to start. You can start the crawler in detached mode by passing --detach to start.sh, as shown below.
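
For example:

./scripts/start.sh            # start in the foreground
./scripts/start.sh --detach   # start in the background (detached mode)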

Note

Ensure you have at least 3 GB of free memory, as the Elasticsearch stack container alone requires 2 GB.
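
On Linux you can check the available memory before starting the stack:

free -h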

How to start the crawling process

Since the API is exposed on localhost:15005, you can use it to start the crawling process:

feeder --api-uri http://localhost:15005 --url https://www.facebookcorewwwi.onion

This tells the API to publish the given URL to the crawling queue.
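
The feeder is only a thin client in front of the API. If you prefer to talk to the API directly, a request along these lines should do the same thing (the endpoint path and payload format are assumptions; check the API source for the exact contract):

# Hypothetical raw API call equivalent to the feeder command above.
curl -X POST http://localhost:15005/v1/urls \
     -H 'Content-Type: application/json' \
     -d '"https://www.facebookcorewwwi.onion"'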

How to view results

At the moment there is no Trandoshan dashboard. You can use the Kibana dashboard available at http://localhost:15004.

You will need to create an index pattern named 'resources'; when Kibana asks for the time field, choose 'time'.
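
Alternatively, the indexed resources should be queryable through the API itself. Something like the following may work (the endpoint and query parameter are assumptions; check the API source):

# Hypothetical API query to search the indexed resources.
curl 'http://localhost:15005/v1/resources?keyword=facebook'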