Merge branch 'master' of github.com:beardog108/onionr

master
Kevin 2018-08-20 14:26:48 -05:00
commit 823bcc48b9
62 changed files with 4070 additions and 1602 deletions

.dockerignore Normal file

@ -0,0 +1,4 @@
onionr/data/**/*
onionr/data
RUN-WINDOWS.bat
MY-RUN.sh

.gitlab-ci.yml Normal file

@ -0,0 +1,6 @@
test:
script:
- apt-get update -qy
- apt-get install -y python3-dev python3-pip tor
- pip3 install -r requirements.txt
- make test

Dockerfile Normal file

@ -0,0 +1,28 @@
FROM ubuntu:bionic
#Base settings
ENV HOME /root
#Install needed packages
RUN apt update && apt install -y python3 python3-dev python3-pip tor locales nano
RUN sed -i -e 's/# en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen && \
locale-gen
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US:en
ENV LC_ALL en_US.UTF-8
WORKDIR /srv/
ADD ./requirements.txt /srv/requirements.txt
RUN pip3 install -r requirements.txt
WORKDIR /root/
#Add Onionr source
COPY . /root/
VOLUME /root/data/
#Set upstart command
CMD bash
#Expose ports
EXPOSE 8080


@ -1,32 +1,34 @@
PREFIX = /usr/local
.DEFAULT_GOAL := setup
setup:
sudo pip3 install -r requirements.txt
-@cd onionr/static-data/ui/; ./compile.py
install:
-sudo rm -rf /usr/share/onionr/
+cp -rfp ./onionr $(DESTDIR)$(PREFIX)/share/onionr
-sudo rm -f /usr/bin/onionr
+echo '#!/bin/sh' > $(DESTDIR)$(PREFIX)/bin/onionr
-sudo cp -rp ./onionr /usr/share/onionr
+echo 'cd $(DESTDIR)$(PREFIX)/share/onionr' > $(DESTDIR)$(PREFIX)/bin/onionr
-sudo sh -c "echo \"#!/bin/sh\ncd /usr/share/onionr/\n./onionr.py \\\"\\\$$@\\\"\" > /usr/bin/onionr"
+echo './onionr "$$@"' > $(DESTDIR)$(PREFIX)/bin/onionr
-sudo chmod +x /usr/bin/onionr
+chmod +x $(DESTDIR)$(PREFIX)/bin/onionr
sudo chown -R `whoami` /usr/share/onionr/
uninstall:
-sudo rm -rf /usr/share/onionr
+rm -rf $(DESTDIR)$(PREFIX)/share/onionr
-sudo rm -f /usr/bin/onionr
+rm -f $(DESTDIR)$(PREFIX)/bin/onionr
test:
@./RUN-LINUX.sh stop
@sleep 1
@rm -rf onionr/data-backup
@mv onionr/data onionr/data-backup | true > /dev/null 2>&1
--@cd onionr; ./tests.py; ./cryptotests.py;
+-@cd onionr; ./tests.py;
@rm -rf onionr/data
@mv onionr/data-backup onionr/data | true > /dev/null 2>&1
soft-reset:
@echo "Soft-resetting Onionr..."
-rm -f onionr/data/blocks/*.dat onionr/data/*.db | true > /dev/null 2>&1
+rm -f onionr/data/blocks/*.dat onionr/data/*.db onionr/data/block-nonces.dat | true > /dev/null 2>&1
@./RUN-LINUX.sh version | grep -v "Failed" --color=always
reset:


@ -1,34 +1,2 @@
BLOCK HEADERS (simple ID system to identify block type)
-----------------------------------------------
-crypt- (encrypted block)
-bin- (binary file)
-txt- (plaintext)
HTTP API
------------------------------------------------
TODO
/client/ (Private info, not publicly accessible)
- hello
- hello world
- shutdown
- exit onionr
- stats
- show node stats
/public/
- firstConnect
- initialize with peer
- ping
- pong
- setHMAC
- set a created symmetric key
- getDBHash
- get the hash of the current hash database state
- getPGP
- export node's PGP public key
- getData
- get a data block
- getBlockHashes
- get a list of the node's hashes
-------------------------------------------------


@ -1,57 +0,0 @@
# Onionr Protocol Spec v2
A P2P platform for Tor & I2P
# Overview
Onionr is an encrypted microblogging & mailing system designed in the spirit of Twitter.
There are no central servers and all traffic is peer to peer by default (routed via Tor or I2P).
User IDs are simply Tor onion service/I2P host id + Ed25519 key fingerprint.
Private blocks are only able to be read by the intended peer.
All traffic is over Tor/I2P, connecting only to Tor onion and I2P hidden services.
## Goals:
* Selective sharing of information
* Secure & semi-anonymous direct messaging
* Forward secrecy
* Defense in depth
* Data should be secure for years to come
* Decentralization
* Avoid browser-based exploits that plague similar software
* Avoid timing attacks & unexpected metadata leaks
## Protocol
Onionr nodes use HTTP (over Tor/I2P) to exchange keys, metadata, and blocks. Blocks are identified by their sha3_256 hash. Nodes sync a table of blocks hashes and attempt to download blocks they do not yet have from random peers.
Blocks may be encrypted using Curve25519 or Salsa20.
Blocks have IDs in the following format:
-Optional hash of public key of publisher (base64)-optional signature (non-optional if publisher is specified) (Base64)-block type-block hash(sha3-256)
pubkeyHash-signature-type-hash
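For illustration, a minimal sketch (not from the Onionr codebase; names are hypothetical) of assembling such an ID from a block's bytes:

```python
import hashlib

def make_block_id(block_bytes, pubkey_hash_b64='', signature_b64='', block_type='txt'):
    # Blocks are identified by the sha3_256 hash of their contents
    block_hash = hashlib.sha3_256(block_bytes).hexdigest()
    # pubkeyHash-signature-type-hash; publisher hash and signature are optional
    return '-'.join([pubkey_hash_b64, signature_b64, block_type, block_hash])
```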
## Connections
When a node first comes online, it attempts to bootstrap using a default list provided by a client.
When two peers connect, they exchange Ed25519 keys (if applicable) then Salsa20 keys.
Salsa20 keys are regenerated either every X many communications with a peer or every X minutes.
Every 100kb or every 2 hours is a recommended default.
All valid requests carrying an HMAC should be recorded until that HMAC expires, to prevent replay attacks.
## Peer Types
* Friends:
* Encrypted friends only posts to one another
* Usually less strict rate & storage limits
* Strangers:
* Used for storage of encrypted or public information
* Can only read public posts
* Usually stricter rate & storage limits
## Spam mitigation
To send or receive data, a node can optionally require the other node to perform proof of work: generate a hash whose hexadecimal representation contains a given random string at a random location. Clients configure what difficulty to request and what difficulty they are willing to perform themselves. Difficulty should correlate with recent network & disk usage and with data size. Friends can be configured with less strict (or nonexistent) limits, separately from strangers.
Rate limits can be strict, as Onionr is not intended to be an instant messaging application.
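As a rough sketch of the proof-of-work idea described above (a hypothetical helper, not the reference implementation), the requesting node picks a random hex string and the sender brute-forces a nonce until the digest contains it:

```python
import hashlib

def solve_challenge(data: bytes, challenge: str) -> int:
    '''Find a nonce so the hex digest of data+nonce contains the requested substring.'''
    nonce = 0
    while True:
        digest = hashlib.sha3_256(data + str(nonce).encode()).hexdigest()
        if challenge in digest:
            return nonce
        nonce += 1
```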

Binary image file changed (not shown): 12 KiB before, 7.1 KiB after.

docs/onionr-web.png (new binary file, not shown): 40 KiB.

docs/whitepaper.md Normal file

@ -0,0 +1,97 @@
<p align="center">
<img src="onionr-logo.png" alt="<h1>Onionr</h1>">
</p>
<p align="center">Anonymous, Decentralized, Distributed Network</p>
# Introduction
The most important thing in the modern world is information, and the ability to communicate it freely with others. The internet has given humanity the ability to spread information globally, but there are many people who try (and sometimes succeed) to stifle the flow of information.
Internet censorship comes in many forms: state censorship, corporate consolidation of media, threats of violence, and network exploitation (e.g. denial of service attacks).
To prevent censorship or loss of information, these measures must be in place:
* Resistance to censorship of underlying infrastructure or of network hosts
* Anonymization of users by default
* Making it impractical to violently coerce human users (personal threats/"doxxing", or totalitarian regime censorship)
* Economic availability. A system should not rely on a single device to be constantly online, and should not be overly expensive to use. The majority of people in the world own cell phones, but comparatively few own personal computers, particularly in developing countries.
There are many great projects that tackle decentralization and privacy issues, but there are none which tackle all of the above issues. Some of the existing networks have also not worked well in practice, or are more complicated than they need to be.
# Onionr Design Goals
When designing Onionr we had these goals in mind:
* Anonymous Blocks
* Difficult to determine block creator or users regardless of transport used
* Default Anonymous Transport Layer
* Tor and I2P
* Transport agnosticism
* Default global sync, but can configure what blocks to seed
* Spam resistance
* Encrypted blocks
# Onionr Design
(See the spec for specific details)
## General Overview
At its core, Onionr is merely a description for storing data in self-verifying packages ("blocks"). These blocks can be encrypted to a user (or self), encrypted symmetrically, or not at all. Blocks can be signed by their creator, but regardless, they are self-verifying due to being identified by a sha3-256 hash value; once a block is created, it cannot be modified.
Onionr exchanges a list of blocks between all nodes. By default, all nodes download and share all other blocks, however this is configurable.
## User IDs
User IDs are simply Ed25519 public keys. They are represented in Base32 format, or encoded using the [PGP Word List](https://en.wikipedia.org/wiki/PGP_word_list).
Public keys can be generated deterministically from a password using a key derivation function (Argon2id). This password can be shared among many users in order to share data anonymously within a group, using only one password. This is useful in some cases, but risky: if one user lets the key be compromised and does not notify the group or revoke the key, there is no way for the others to know.
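A PyNaCl-based sketch of that derivation (the KDF parameters, salt handling, and encodings here are assumptions, not Onionr's exact scheme):

```python
import base64
import nacl.pwhash, nacl.signing

def derive_identity(password: bytes, salt: bytes):
    # Stretch the shared password into a 32-byte Ed25519 seed with Argon2id
    # (salt must be nacl.pwhash.argon2id.SALTBYTES long)
    seed = nacl.pwhash.argon2id.kdf(32, password, salt)
    signing_key = nacl.signing.SigningKey(seed)
    # User ID: the Ed25519 public key, Base32 encoded
    user_id = base64.b32encode(bytes(signing_key.verify_key)).decode()
    return signing_key, user_id
```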
## Nodes
Although Onionr is transport agnostic, the only supported transports in the reference implementation are Tor .onion services and I2P hidden services. Nodes announce their address on creation.
### Node Profiling
To mitigate maliciously slow or unreliable nodes, Onionr builds a profile on the nodes it connects to. Nodes are assigned a score, which rises with successful block transfers, speed, and reliability, and falls as a node proves unreliable. If a node is unreachable for over 24 hours after contact, it is forgotten. Onionr can also prioritize connections to 'friend' nodes.
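A toy illustration of the scoring idea (field names and weights are invented for the example, not taken from Onionr):

```python
class PeerScore:
    '''Track a simple reputation value for one peer address.'''
    def __init__(self, address: str):
        self.address = address
        self.score = 0

    def record_transfer(self, success: bool, fast: bool = False):
        # successful (and fast) transfers raise the score; failures lower it
        if success:
            self.score += 2 if fast else 1
        else:
            self.score -= 2
```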
## Block Format
Onionr blocks are very simple. They are structured in two main parts: a metadata section and a data section, with a line feed delimiting where metadata ends and data begins.
Metadata defines what kind of data is in a block, signature data, encryption settings, and other arbitrary information.
Optionally, a random token can be inserted into the metadata for use in Proof of Work.
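A minimal sketch of packing and parsing that layout (the metadata field names here are assumptions):

```python
import json

def pack_block(data: bytes, block_type: str = 'txt', extra_meta: dict = None) -> bytes:
    # One JSON metadata line, a line feed, then the raw data section
    meta = {'type': block_type, **(extra_meta or {})}
    return json.dumps(meta).encode() + b'\n' + data

def unpack_block(raw: bytes):
    # Split on the first line feed: metadata before it, data after it
    meta_line, _, data = raw.partition(b'\n')
    return json.loads(meta_line), data
```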
### Block Encryption
For encryption, Onionr uses ephemeral Curve25519 keys for key exchange and XSalsa20-Poly1305 as a symmetric cipher, or optionally using only XSalsa20-Poly1305 with a pre-shared key.
Regardless of encryption, blocks can be signed internally using Ed25519.
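The named primitives map directly onto libsodium constructions; a hedged PyNaCl sketch follows (not necessarily the exact construction Onionr uses internally):

```python
import nacl.public, nacl.secret, nacl.signing

# Asymmetric: SealedBox generates an ephemeral Curve25519 key pair
# and encrypts with XSalsa20-Poly1305
def encrypt_to_peer(peer_pk: nacl.public.PublicKey, plaintext: bytes) -> bytes:
    return nacl.public.SealedBox(peer_pk).encrypt(plaintext)

# Symmetric: SecretBox is XSalsa20-Poly1305 with a 32-byte pre-shared key
def encrypt_with_shared_key(key: bytes, plaintext: bytes) -> bytes:
    return nacl.secret.SecretBox(key).encrypt(plaintext)

# Blocks can additionally be signed with Ed25519, regardless of encryption
def sign_block(signing_key: nacl.signing.SigningKey, block: bytes) -> bytes:
    return signing_key.sign(block)
```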
## Block Exchange
Blocks can be exchanged using any method, as they are not reliant on any other blocks.
By default, every node shares a list of the blocks it is sharing, and will download any blocks it does not yet have.
## Spam mitigation and block storage time
By default, an Onionr node adjusts the target difficulty for blocks to be accepted based on the percent of disk usage allocated to Onionr.
Blocks are stored indefinitely until the allocated space is filled, at which point Onionr will remove the oldest blocks as needed, save for "pinned" blocks, which are permanently stored.
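A hypothetical heuristic illustrating the idea (the actual difficulty formula is not specified here):

```python
import math
from pathlib import Path

def target_difficulty(data_dir: str, allocated_bytes: int, base_difficulty: int = 1) -> int:
    # Require more proof of work as the allocated storage fills up
    used = sum(p.stat().st_size for p in Path(data_dir).rglob('*') if p.is_file())
    percent_full = min(used / allocated_bytes, 1.0)
    return base_difficulty + math.floor(percent_full * 4)
```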
## Block Timestamping
Onionr can provide evidence of when a block was inserted by requesting other users to sign a hash of the current time with the block data hash: sha3_256(time + sha3_256(block data)).
This can be done either by the creator of the block prior to generation, or by any node after insertion.
In addition, randomness beacons such as the one operated by [NIST](https://beacon.nist.gov/home) or the hash of the latest blocks in a cryptocurrency network could be used to affirm that a block was at least not *created* before a given time.
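The timestamp digest can be computed as sketched below (whether the time and inner hash are fed in as hex strings or raw bytes is an assumption):

```python
import hashlib, time

def timestamp_digest(block_data: bytes, when: int = None) -> str:
    # sha3_256(time + sha3_256(block data)); another node signs this digest
    # to attest that the block existed at `when`
    when = int(time.time()) if when is None else when
    inner = hashlib.sha3_256(block_data).hexdigest()
    return hashlib.sha3_256(str(when).encode() + inner.encode()).hexdigest()
```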
# Direct Connections
We propose a system to


@ -18,18 +18,21 @@
along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
import flask
-from flask import request, Response, abort
+from flask import request, Response, abort, send_from_directory
from multiprocessing import Process
from gevent.wsgi import WSGIServer
-import sys, random, threading, hmac, hashlib, base64, time, math, os, logger, config
+import sys, random, threading, hmac, hashlib, base64, time, math, os, json
from core import Core
from onionrblockapi import Block
-import onionrutils, onionrcrypto
+import onionrutils, onionrexceptions, onionrcrypto, blockimporter, onionrevents as events, logger, config
class API:
'''
Main HTTP API (Flask)
'''
callbacks = {'public' : {}, 'private' : {}}
def validateToken(self, token):
'''
Validate that the client token matches the given token
@ -42,6 +45,30 @@ class API:
except TypeError:
return False
def guessMime(path):
'''
Guesses the mime type from the input filename
'''
mimetypes = {
'html' : 'text/html',
'js' : 'application/javascript',
'css' : 'text/css',
'png' : 'image/png',
'jpg' : 'image/jpeg'
}
for mimetype in mimetypes:
logger.debug(path + ' endswith .' + mimetype + '?')
if path.endswith('.%s' % mimetype):
logger.debug('- True!')
return mimetypes[mimetype]
else:
logger.debug('- no')
logger.debug('%s not in %s' % (path, mimetypes))
return 'text/plain'
def __init__(self, debug):
'''
Initialize the api server, preping variables for later use
@ -73,6 +100,7 @@ class API:
self.i2pEnabled = config.get('i2p.host', False)
self.mimeType = 'text/plain'
self.overrideCSP = False
with open('data/time-bypass.txt', 'w') as bypass:
bypass.write(self.timeBypassToken)
@ -92,7 +120,6 @@ class API:
Simply define the request as not having yet failed, before every request.
'''
self.requestFailed = False
return
@app.after_request
@ -102,17 +129,85 @@ class API:
#else:
# resp.headers['server'] = 'Onionr'
resp.headers['Content-Type'] = self.mimeType
if not self.overrideCSP:
resp.headers["Content-Security-Policy"] = "default-src 'none'; script-src 'none'; object-src 'none'; style-src data: 'unsafe-inline'; img-src data:; media-src 'none'; frame-src 'none'; font-src 'none'; connect-src 'none'"
resp.headers['X-Frame-Options'] = 'deny'
resp.headers['X-Content-Type-Options'] = "nosniff"
resp.headers['server'] = 'Onionr'
# reset to text/plain to help prevent browser attacks
if self.mimeType != 'text/plain':
self.mimeType = 'text/plain'
self.overrideCSP = False
return resp
@app.route('/www/private/<path:path>')
def www_private(path):
startTime = math.floor(time.time())
if request.args.get('timingToken') is None:
timingToken = ''
else:
timingToken = request.args.get('timingToken')
if not config.get("www.private.run", True):
abort(403)
self.validateHost('private')
endTime = math.floor(time.time())
elapsed = endTime - startTime
if not hmac.compare_digest(timingToken, self.timeBypassToken):
if elapsed < self._privateDelayTime:
time.sleep(self._privateDelayTime - elapsed)
return send_from_directory('static-data/www/private/', path)
@app.route('/www/public/<path:path>')
def www_public(path):
if not config.get("www.public.run", True):
abort(403)
self.validateHost('public')
return send_from_directory('static-data/www/public/', path)
@app.route('/ui/<path:path>')
def ui_private(path):
startTime = math.floor(time.time())
'''
if request.args.get('timingToken') is None:
timingToken = ''
else:
timingToken = request.args.get('timingToken')
'''
if not config.get("www.ui.run", True):
abort(403)
if config.get("www.ui.private", True):
self.validateHost('private')
else:
self.validateHost('public')
'''
endTime = math.floor(time.time())
elapsed = endTime - startTime
if not hmac.compare_digest(timingToken, self.timeBypassToken):
if elapsed < self._privateDelayTime:
time.sleep(self._privateDelayTime - elapsed)
'''
logger.debug('Serving %s' % path)
self.mimeType = API.guessMime(path)
self.overrideCSP = True
return send_from_directory('static-data/www/ui/dist/', path, mimetype = API.guessMime(path))
@app.route('/client/')
def private_handler():
if request.args.get('timingToken') is None:
@ -132,6 +227,9 @@ class API:
if not self.validateToken(token):
abort(403)
events.event('webapi_private', onionr = None, data = {'action' : action, 'data' : data, 'timingToken' : timingToken, 'token' : token})
self.validateHost('private')
if action == 'hello':
resp = Response('Hello, World! ' + request.host)
@ -141,17 +239,120 @@ class API:
resp = Response('Goodbye')
elif action == 'ping':
resp = Response('pong')
-elif action == 'stats':
+elif action == "insertBlock":
-resp = Response('me_irl')
+response = {'success' : False, 'reason' : 'An unknown error occurred'}
raise Exception
-elif action == 'site':
+if not ((data is None) or (len(str(data).strip()) == 0)):
-block = data
+try:
-siteData = self._core.getData(data)
+decoded = json.loads(data)
response = 'not found'
-if siteData != '' and siteData != False:
+block = Block()
self.mimeType = 'text/html'
-response = siteData.split(b'-', 2)[-1]
+sign = False
resp = Response(response)
for key in decoded:
val = decoded[key]
key = key.lower()
if key == 'type':
block.setType(val)
elif key in ['body', 'content']:
block.setContent(val)
elif key == 'parent':
block.setParent(val)
elif key == 'sign':
sign = (str(val).lower() == 'true')
hash = block.save(sign = sign)
if not hash is False:
response['success'] = True
response['hash'] = hash
response['reason'] = 'Successfully wrote block to file'
else:
response['reason'] = 'Failed to save the block'
except Exception as e:
logger.warn('insertBlock api request failed', error = e)
logger.debug('Here\'s the request: %s' % data)
else:
response = {'success' : False, 'reason' : 'Missing `data` parameter.', 'blocks' : {}}
resp = Response(json.dumps(response))
elif action == 'searchBlocks':
response = {'success' : False, 'reason' : 'An unknown error occurred', 'blocks' : {}}
if not ((data is None) or (len(str(data).strip()) == 0)):
try:
decoded = json.loads(data)
type = None
signer = None
signed = None
parent = None
reverse = False
limit = None
for key in decoded:
val = decoded[key]
key = key.lower()
if key == 'type':
type = str(val)
elif key == 'signer':
if isinstance(val, list):
signer = val
else:
signer = str(val)
elif key == 'signed':
signed = (str(val).lower() == 'true')
elif key == 'parent':
parent = str(val)
elif key == 'reverse':
reverse = (str(val).lower() == 'true')
elif key == 'limit':
limit = 10000
if val is None:
val = limit
limit = min(limit, int(val))
blockObjects = Block.getBlocks(type = type, signer = signer, signed = signed, parent = parent, reverse = reverse, limit = limit)
logger.debug('%s results for query %s' % (len(blockObjects), decoded))
blocks = list()
for block in blockObjects:
blocks.append({
'hash' : block.getHash(),
'type' : block.getType(),
'content' : block.getContent(),
'signature' : block.getSignature(),
'signedData' : block.getSignedData(),
'signed' : block.isSigned(),
'valid' : block.isValid(),
'date' : (int(block.getDate().strftime("%s")) if not block.getDate() is None else None),
'parent' : (block.getParent().getHash() if not block.getParent() is None else None),
'metadata' : block.getMetadata(),
'header' : block.getHeader()
})
response['success'] = True
response['blocks'] = blocks
response['reason'] = 'Success'
except Exception as e:
logger.warn('searchBlock api request failed', error = e)
logger.debug('Here\'s the request: %s' % data)
else:
response = {'success' : False, 'reason' : 'Missing `data` parameter.', 'blocks' : {}}
resp = Response(json.dumps(response))
elif action in API.callbacks['private']:
resp = Response(str(getCallback(action, scope = 'private')(request)))
else:
resp = Response('(O_o) Dude what? (invalid command)')
endTime = math.floor(time.time())
@ -175,6 +376,68 @@ class API:
resp = Response("")
return resp
@app.route('/public/upload/', methods=['POST'])
def blockUpload():
self.validateHost('public')
resp = 'failure'
try:
data = request.form['block']
except KeyError:
logger.warn('No block specified for upload')
pass
else:
if sys.getsizeof(data) < 100000000:
try:
if blockimporter.importBlockFromData(data, self._core):
resp = 'success'
else:
logger.warn('Error encountered importing uploaded block')
except onionrexceptions.BlacklistedBlock:
logger.debug('uploaded block is blacklisted')
pass
resp = Response(resp)
return resp
@app.route('/public/announce/', methods=['POST'])
def acceptAnnounce():
self.validateHost('public')
resp = 'failure'
powHash = ''
randomData = ''
newNode = ''
ourAdder = self._core.hsAddress.encode()
try:
newNode = request.form['node'].encode()
except KeyError:
logger.warn('No block specified for upload')
pass
else:
try:
randomData = request.form['random']
randomData = base64.b64decode(randomData)
except KeyError:
logger.warn('No random data specified for upload')
else:
nodes = newNode + self._core.hsAddress.encode()
nodes = self._core._crypto.blake2bHash(nodes)
powHash = self._core._crypto.blake2bHash(randomData + nodes)
try:
powHash = powHash.decode()
except AttributeError:
pass
if powHash.startswith('0000'):
try:
newNode = newNode.decode()
except AttributeError:
pass
if self._core.addAddress(newNode):
resp = 'Success'
else:
logger.warn(newNode.decode() + ' failed to meet POW: ' + powHash)
resp = Response(resp)
return resp
@app.route('/public/')
def public_handler():
# Public means it is publicly network accessible
@ -186,6 +449,9 @@ class API:
data = data
except:
data = ''
events.event('webapi_public', onionr = None, data = {'action' : action, 'data' : data, 'requestingPeer' : requestingPeer, 'request' : request})
if action == 'firstConnect':
pass
elif action == 'ping':
@ -196,22 +462,11 @@ class API:
resp = Response(self._utils.getBlockDBHash())
elif action == 'getBlockHashes':
resp = Response('\n'.join(self._core.getBlockList()))
elif action == 'directMessage':
resp = Response(self._core.handle_direct_connection(data))
elif action == 'announce':
if data != '':
# TODO: require POW for this
if self._core.addAddress(data):
resp = Response('Success')
else:
resp = Response('')
else:
resp = Response('')
# setData should be something the communicator initiates, not this api
elif action == 'getData':
resp = ''
if self._utils.validateHash(data):
-if not os.path.exists('data/blocks/' + data + '.db'):
+if os.path.exists('data/blocks/' + data + '.dat'):
block = Block(hash=data.encode(), core=self._core)
resp = base64.b64encode(block.getRaw().encode()).decode()
if len(resp) == 0:
@ -227,6 +482,8 @@ class API:
peers = self._core.listPeers(getPow=True)
response = ','.join(peers)
resp = Response(response)
elif action in API.callbacks['public']:
resp = Response(str(getCallback(action, scope = 'public')(request)))
else:
resp = Response("")
@ -243,7 +500,6 @@ class API:
def authFail(err):
self.requestFailed = True
resp = Response("403")
return resp
@app.errorhandler(401)
@ -256,11 +512,13 @@ class API:
logger.info('Starting client on ' + self.host + ':' + str(bindPort) + '...', timestamp=False)
try:
while len(self._core.hsAddress) == 0:
self._core.refreshFirstStartVars()
time.sleep(0.5)
self.http_server = WSGIServer((self.host, bindPort), app)
self.http_server.serve_forever()
except KeyboardInterrupt:
pass
#app.run(host=self.host, port=bindPort, debug=False, threaded=True)
except Exception as e:
logger.error(str(e))
logger.fatal('Failed to start client on ' + self.host + ':' + str(bindPort) + ', exiting...')
@ -297,3 +555,31 @@ class API:
# we exit rather than abort to avoid fingerprinting
logger.debug('Avoiding fingerprinting, exiting...')
sys.exit(1)
def setCallback(action, callback, scope = 'public'):
if not scope in API.callbacks:
return False
API.callbacks[scope][action] = callback
return True
def removeCallback(action, scope = 'public'):
if (not scope in API.callbacks) or (not action in API.callbacks[scope]):
return False
del API.callbacks[scope][action]
return True
def getCallback(action, scope = 'public'):
if (not scope in API.callbacks) or (not action in API.callbacks[scope]):
return None
return API.callbacks[scope][action]
def getCallbacks(scope = None):
if (not scope is None) and (scope in API.callbacks):
return API.callbacks[scope]
return API.callbacks
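For context on the `/public/announce/` handler above: an announcing client has to find random data whose blake2b proof hash starts with '0000'. A rough client-side sketch follows; the blake2b digest size and whether Onionr's blake2bHash helper returns hex or raw bytes are assumptions.

```python
import base64, hashlib, os

def generate_announce_pow(our_address: bytes, remote_address: bytes) -> str:
    # Mirror of the server-side check: blake2b(random + blake2b(node + remote))
    # must start with '0000' before the announce is accepted.
    combined = hashlib.blake2b(our_address + remote_address).hexdigest().encode()
    while True:
        random_data = os.urandom(16)
        proof = hashlib.blake2b(random_data + combined).hexdigest()
        if proof.startswith('0000'):
            # sent base64-encoded as the 'random' form field
            return base64.b64encode(random_data).decode()
```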

onionr/blockimporter.py Normal file

@ -0,0 +1,46 @@
'''
Onionr - P2P Microblogging Platform & Social network
Import block data and save it
'''
'''
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
import core, onionrexceptions, logger
def importBlockFromData(content, coreInst):
retData = False
dataHash = coreInst._crypto.sha3Hash(content)
if coreInst._blacklist.inBlacklist(dataHash):
raise onionrexceptions.BlacklistedBlock('%s is a blacklisted block' % (dataHash,))
if not isinstance(coreInst, core.Core):
raise Exception("coreInst must be an Onionr core instance")
try:
content = content.encode()
except AttributeError:
pass
metas = coreInst._utils.getBlockMetadataFromData(content) # returns tuple(metadata, meta), meta is also in metadata
metadata = metas[0]
if coreInst._utils.validateMetadata(metadata, metas[2]): # check if metadata is valid
if coreInst._crypto.verifyPow(content): # check if POW is enough/correct
logger.info('Block passed proof, saving.')
blockHash = coreInst.setData(content)
coreInst.addToBlockDB(blockHash, dataSaved=True)
coreInst._utils.processBlockMetadata(blockHash) # caches block metadata values to block database
retData = True
return retData
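A hedged usage sketch for this module, roughly how the `/public/upload/` endpoint above calls it (the bare Core() construction is an assumption):

```python
import core, blockimporter, onionrexceptions

def save_uploaded_block(raw_block: bytes) -> bool:
    coreInst = core.Core()
    try:
        # importBlockFromData validates metadata and proof of work before persisting
        return blockimporter.importBlockFromData(raw_block, coreInst)
    except onionrexceptions.BlacklistedBlock:
        return False
```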


@ -1,783 +0,0 @@
#!/usr/bin/env python3
'''
Onionr - P2P Microblogging Platform & Social network.
This file contains both the OnionrCommunicate class for communcating with peers
and code to operate as a daemon, getting commands from the command queue database (see core.Core.daemonQueue)
'''
'''
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
import sqlite3, requests, hmac, hashlib, time, sys, os, math, logger, urllib.parse, base64, binascii, random, json, threading
import core, onionrutils, onionrcrypto, netcontroller, onionrproofs, config, onionrplugins as plugins
from onionrblockapi import Block
class OnionrCommunicate:
def __init__(self, debug, developmentMode):
'''
OnionrCommunicate
This class handles communication with nodes in the Onionr network.
'''
self._core = core.Core()
self._utils = onionrutils.OnionrUtils(self._core)
self._crypto = onionrcrypto.OnionrCrypto(self._core)
self._netController = netcontroller.NetController(0) # arg is the HS port but not needed rn in this file
self.newHashes = {} # use this to not keep hashes around too long if we cant get their data
self.keepNewHash = 12
self.ignoredHashes = []
self.highFailureAmount = 7
self.communicatorThreads = 0
self.maxThreads = 75
self.processBlocksThreads = 0
self.lookupBlocksThreads = 0
self.blocksProcessing = [] # list of blocks currently processing, to avoid trying a block twice at once in 2 seperate threads
self.peerStatus = {} # network actions (active requests) for peers used mainly to prevent conflicting actions in threads
self.communicatorTimers = {} # communicator timers, name: rate (in seconds)
self.communicatorTimerCounts = {}
self.communicatorTimerFuncs = {}
self.registerTimer('blockProcess', 20)
self.registerTimer('highFailure', 10)
self.registerTimer('heartBeat', 10)
self.registerTimer('pex', 120)
logger.debug('Communicator debugging enabled.')
with open('data/hs/hostname', 'r') as torID:
todID = torID.read()
apiRunningCheckRate = 10
apiRunningCheckCount = 0
self.peerData = {} # Session data for peers (recent reachability, speed, etc)
if os.path.exists(self._core.queueDB):
self._core.clearDaemonQueue()
# Loads in and starts the enabled plugins
plugins.reload()
# Print nice header thing :)
if config.get('general.display_header', True):
self.header()
while True:
command = self._core.daemonQueue()
# Process blocks based on a timer
self.timerTick()
# TODO: migrate below if statements to be own functions which are called in the above timerTick() function
if self.communicatorTimers['highFailure'] == self.communicatorTimerCounts['highFailure']:
self.communicatorTimerCounts['highFailure'] = 0
for i in self.peerData:
if self.peerData[i]['failCount'] >= self.highFailureAmount:
self.peerData[i]['failCount'] -= 1
if self.communicatorTimers['pex'] == self.communicatorTimerCounts['pex']:
pT1 = threading.Thread(target=self.getNewPeers, name="pT1")
pT1.start()
pT2 = threading.Thread(target=self.getNewPeers, name="pT2")
pT2.start()
self.communicatorTimerCounts['pex'] = 0# TODO: do not reset timer if low peer count
if self.communicatorTimers['heartBeat'] == self.communicatorTimerCounts['heartBeat']:
logger.debug('Communicator heartbeat')
self.communicatorTimerCounts['heartBeat'] = 0
if self.communicatorTimers['blockProcess'] == self.communicatorTimerCounts['blockProcess']:
lT1 = threading.Thread(target=self.lookupBlocks, name="lt1", args=(True,))
lT2 = threading.Thread(target=self.lookupBlocks, name="lt2", args=(True,))
lT3 = threading.Thread(target=self.lookupBlocks, name="lt3", args=(True,))
lT4 = threading.Thread(target=self.lookupBlocks, name="lt4", args=(True,))
pbT1 = threading.Thread(target=self.processBlocks, name='pbT1', args=(True,))
pbT2 = threading.Thread(target=self.processBlocks, name='pbT2', args=(True,))
pbT3 = threading.Thread(target=self.processBlocks, name='pbT3', args=(True,))
pbT4 = threading.Thread(target=self.processBlocks, name='pbT4', args=(True,))
if (self.maxThreads - 8) >= threading.active_count():
lT1.start()
lT2.start()
lT3.start()
lT4.start()
pbT1.start()
pbT2.start()
pbT3.start()
pbT4.start()
self.communicatorTimerCounts['blockProcess'] = 0
else:
logger.debug(threading.active_count())
logger.debug('Too many threads.')
if command != False:
if command[0] == 'shutdown':
logger.info('Daemon received exit command.', timestamp=True)
break
elif command[0] == 'announceNode':
announceAttempts = 3
announceAttemptCount = 0
announceVal = False
logger.info('Announcing node to %s...' % command[1], timestamp=True)
while not announceVal:
announceAttemptCount += 1
announceVal = self.performGet('announce', command[1], data=self._core.hsAdder.replace('\n', ''), skipHighFailureAddress=True)
# logger.info(announceVal)
if announceAttemptCount >= announceAttempts:
logger.warn('Unable to announce to %s' % command[1])
break
elif command[0] == 'runCheck':
logger.debug('Status check; looks good.')
open('data/.runcheck', 'w+').close()
elif command[0] == 'kex':
self.pexCount = pexTimer - 1
elif command[0] == 'event':
# todo
pass
elif command[0] == 'checkCallbacks':
try:
data = json.loads(command[1])
logger.info('Checking for callbacks with connection %s...' % data['id'])
self.check_callbacks(data, config.get('general.dc_execcallbacks', True))
events.event('incoming_direct_connection', data = {'callback' : True, 'communicator' : self, 'data' : data})
except Exception as e:
logger.error('Failed to interpret callbacks for checking', e)
elif command[0] == 'incomingDirectConnection':
try:
data = json.loads(command[1])
logger.info('Handling incoming connection %s...' % data['id'])
self.incoming_direct_connection(data)
events.event('incoming_direct_connection', data = {'callback' : False, 'communicator' : self, 'data' : data})
except Exception as e:
logger.error('Failed to handle callbacks for checking', e)
apiRunningCheckCount += 1
# check if local API is up
if apiRunningCheckCount > apiRunningCheckRate:
if self._core._utils.localCommand('ping') != 'pong':
for i in range(4):
if self._utils.localCommand('ping') == 'pong':
apiRunningCheckCount = 0
break # break for loop
time.sleep(1)
else:
# This executes if the api is NOT detected to be running
logger.error('Daemon detected API crash (or otherwise unable to reach API after long time), stopping...')
break # break main daemon loop
apiRunningCheckCount = 0
time.sleep(1)
self._netController.killTor()
return
future_callbacks = {}
connection_handlers = {}
id_peer_cache = {}
def registerTimer(self, timerName, rate, timerFunc=None):
'''
Register a communicator timer
'''
self.communicatorTimers[timerName] = rate
self.communicatorTimerCounts[timerName] = 0
self.communicatorTimerFuncs[timerName] = timerFunc
def timerTick(self):
'''
Increments timers "ticks" and calls funcs if applicable
'''
tName = ''
for i in self.communicatorTimers.items():
tName = i[0]
self.communicatorTimerCounts[tName] += 1
if self.communicatorTimerCounts[tName] == self.communicatorTimers[tName]:
try:
self.communicatorTimerFuncs[tName]()
except TypeError:
pass
else:
self.communicatorTimerCounts[tName] = 0
def get_connection_handlers(self, name = None):
'''
Returns a list of callback handlers by name, or, if name is None, it returns all handlers.
'''
if name is None:
return self.connection_handlers
elif name in self.connection_handlers:
return self.connection_handlers[name]
else:
return list()
def add_connection_handler(self, name, handler):
'''
Adds a function to be called when an connection that is NOT a callback is received.
Takes in the name of the communication type and the handler as input
'''
if not name in self.connection_handlers:
self.connection_handlers[name] = list()
self.connection_handlers[name].append(handler)
return
def remove_connection_handler(self, name, handler = None):
'''
Removes a connection handler if specified, or removes all by name
'''
if handler is None:
if name in self.connection_handlers:
self.connection_handlers[name].remove(handler)
elif name in self.connection_handlers:
del self.connection_handlers[name]
return
def set_callback(self, identifier, callback):
'''
(Over)writes a callback by communication identifier
'''
if not callback is None:
self.future_callbacks[identifier] = callback
return True
return False
def unset_callback(self, identifier):
'''
Unsets a callback by communication identifier, if set
'''
if identifier in future_callbacks:
del self.future_callbacks[identifier]
return True
return False
def get_callback(self, identifier):
'''
Returns a callback by communication identifier if set, or None
'''
if identifier in self.future_callbacks:
return self.future_callbacks[id]
return None
def direct_connect(self, peer, data = None, callback = None, log = True):
'''
Communicates something directly with the client
- `peer` should obviously be the peer id to request.
- `data` should be a dict (NOT str), with the parameter "type"
ex. {'type': 'sendMessage', 'content': 'hey, this is a dm'}
In that dict, the key 'token' must NEVER be set. If it is, it will
be overwritten.
- if `callback` is set to a function, it will call that function
back if/when the client the request is sent to decides to respond.
Do NOT depend on a response, because users can configure their
clients not to respond to this type of request.
- `log` is set to True by default-- what this does is log the
request for debug purposes. Should be False for sensitive actions.
'''
# TODO: Timing attack prevention
try:
# does not need to be secure random, only used for keeping track of async responses
# Actually, on second thought, it does need to be secure random. Otherwise, if it is predictable, someone could trigger arbitrary callbacks that have been saved on the local node, wrecking all kinds of havoc. Better just to keep it secure random.
identifier = self._utils.token(32)
if 'id' in data:
identifier = data['id']
if not identifier in id_peer_cache:
id_peer_cache[identifier] = peer
if type(data) == str:
# if someone inputs a string instead of a dict, it will assume it's the type
data = {'type' : data}
data['id'] = identifier
data['token'] = '' # later put PoW stuff here or whatever is needed
data_str = json.dumps(data)
events.event('outgoing_direct_connection', data = {'callback' : True, 'communicator' : self, 'data' : data, 'id' : identifier, 'token' : token, 'peer' : peer, 'callback' : callback, 'log' : log})
logger.debug('Direct connection (identifier: "%s"): %s' % (identifier, data_str))
try:
self.performGet('directMessage', peer, data_str)
except:
logger.warn('Failed to connect to peer: "%s".' % str(peer))
return False
if not callback is None:
self.set_callback(identifier, callback)
return True
except Exception as e:
logger.warn('Unknown error, failed to execute direct connect (peer: "%s").' % str(peer), e)
return False
def direct_connect_response(self, identifier, data, peer = None, callback = None, log = True):
'''
Responds to a previous connection. Hostname will be pulled from id_peer_cache if not specified in `peer` parameter.
If yet another callback is requested, it can be put in the `callback` parameter.
'''
if config.get('general.dc_response', True):
data['id'] = identifier
data['sender'] = open('data/hs/hostname').read()
data['callback'] = True
if (origin is None) and (identifier in id_peer_cache):
origin = id_peer_cache[identifier]
if not identifier in id_peer_cache:
id_peer_cache[identifier] = peer
if origin is None:
logger.warn('Failed to identify peer for connection %s' % str(identifier))
return False
else:
return self.direct_connect(peer, data = data, callback = callback, log = log)
else:
logger.warn('Node tried to respond to direct connection id %s, but it was rejected due to `dc_response` restriction.' % str(identifier))
return False
def check_callbacks(self, data, execute = True, remove = True):
'''
Check if a callback is set, and if so, execute it
'''
try:
if type(data) is str:
data = json.loads(data)
if 'id' in data: # TODO: prevent enumeration, require extra PoW
identifier = data['id']
if identifier in self.future_callbacks:
if execute:
self.get_callback(identifier)(data)
logger.debug('Request callback "%s" executed.' % str(identifier))
if remove:
self.unset_callback(identifier)
return True
logger.warn('Unable to find request callback for ID "%s".' % str(identifier))
else:
logger.warn('Unable to identify callback request, `id` parameter missing: %s' % json.dumps(data))
except Exception as e:
logger.warn('Unknown error, failed to execute direct connection callback (peer: "%s").' % str(peer), e)
return False
def incoming_direct_connection(self, data):
'''
This code is run whenever there is a new incoming connection.
'''
if 'type' in data and data['type'] in self.connection_handlers:
for handler in self.get_connection_handlers(name):
handler(data)
return
def getNewPeers(self):
'''
Get new peers and ed25519 keys
'''
peersCheck = 1 # Amount of peers to ask for new peers + keys
peersChecked = 0
peerList = list(self._core.listAdders()) # random ordered list of peers
newKeys = []
newAdders = []
if len(peerList) > 0:
maxN = len(peerList) - 1
else:
peersCheck = 0
maxN = 0
if len(peerList) > peersCheck:
peersCheck = len(peerList)
while peersCheck > peersChecked:
#i = secrets.randbelow(maxN) # cant use prior to 3.6
i = random.randint(0, maxN)
try:
if self.peerStatusTaken(peerList[i], 'pex') or self.peerStatusTaken(peerList[i], 'kex'):
continue
except IndexError:
pass
logger.info('Using %s to find new peers...' % peerList[i], timestamp=True)
try:
newAdders = self.performGet('pex', peerList[i], skipHighFailureAddress=True)
if not newAdders is False: # keep the is False thing in there, it might not be bool
logger.debug('Attempting to merge address: %s' % str(newAdders))
self._utils.mergeAdders(newAdders)
except requests.exceptions.ConnectionError:
logger.info('%s connection failed' % peerList[i], timestamp=True)
continue
else:
try:
logger.info('Using %s to find new keys...' % peerList[i])
newKeys = self.performGet('kex', peerList[i], skipHighFailureAddress=True)
logger.debug('Attempting to merge pubkey: %s' % str(newKeys))
# TODO: Require keys to come with POW token (very large amount of POW)
self._utils.mergeKeys(newKeys)
except requests.exceptions.ConnectionError:
logger.info('%s connection failed' % peerList[i], timestamp=True)
continue
else:
peersChecked += 1
return
def lookupBlocks(self, isThread=False):
'''
Lookup blocks and merge new ones
'''
if isThread:
self.lookupBlocksThreads += 1
peerList = self._core.listAdders()
blockList = list()
for i in peerList:
if self.peerStatusTaken(i, 'getBlockHashes') or self.peerStatusTaken(i, 'getDBHash'):
continue
try:
if self.peerData[i]['failCount'] >= self.highFailureAmount:
continue
except KeyError:
pass
lastDB = self._core.getAddressInfo(i, 'DBHash')
if lastDB == None:
logger.debug('Fetching db hash from %s, no previous known.' % str(i))
else:
logger.debug('Fetching db hash from %s, %s last known' % (str(i), str(lastDB)))
currentDB = self.performGet('getDBHash', i)
if currentDB != False:
logger.debug('%s hash db (from request): %s' % (str(i), str(currentDB)))
else:
logger.warn('Failed to get hash db status for %s' % str(i))
if currentDB != False:
if lastDB != currentDB:
logger.debug('Fetching hash from %s - %s current hash.' % (str(i), currentDB))
try:
blockList.extend(self.performGet('getBlockHashes', i).split('\n'))
except TypeError:
logger.warn('Failed to get data hash from %s' % str(i))
self.peerData[i]['failCount'] -= 1
if self._utils.validateHash(currentDB):
self._core.setAddressInfo(i, "DBHash", currentDB)
if len(blockList) != 0:
pass
for i in blockList:
if len(i.strip()) == 0:
continue
try:
if self._utils.hasBlock(i):
continue
except:
logger.warn('Invalid hash') # TODO: move below validate hash check below
pass
if i in self.ignoredHashes:
continue
#logger.debug('Exchanged block (blockList): ' + i)
if not self._utils.validateHash(i):
# skip hash if it isn't valid
logger.warn('Hash %s is not valid' % str(i))
continue
else:
self.newHashes[i] = 0
logger.debug('Adding %s to hash database...' % str(i))
self._core.addToBlockDB(i)
self.lookupBlocksThreads -= 1
return
def processBlocks(self, isThread=False):
'''
Work with the block database and download any missing blocks
This is meant to be called from the communicator daemon on its timer.
'''
if isThread:
self.processBlocksThreads += 1
for i in self._core.getBlockList(unsaved = True):
if i != "":
if i in self.blocksProcessing or i in self.ignoredHashes:
#logger.debug('already processing ' + i)
continue
else:
self.blocksProcessing.append(i)
try:
self.newHashes[i]
except KeyError:
self.newHashes[i] = 0
# check if a new hash has been around too long, delete it from database and add it to ignore list
if self.newHashes[i] >= self.keepNewHash:
logger.warn('Ignoring block %s because it took to long to get valid data.' % str(i))
del self.newHashes[i]
self._core.removeBlock(i)
self.ignoredHashes.append(i)
continue
self.newHashes[i] += 1
logger.warn('Block is unsaved: %s' % str(i))
data = self.downloadBlock(i)
# if block was successfully gotten (hash already verified)
if data:
del self.newHashes[i] # remove from probation list
# deal with block metadata
blockContent = self._core.getData(i)
try:
blockContent = blockContent.encode()
except AttributeError:
pass
try:
#blockMetadata = json.loads(self._core.getData(i)).split('}')[0] + '}'
blockMetadata = json.loads(blockContent[:blockContent.find(b'\n')].decode())
try:
blockMeta2 = json.loads(blockMetadata['meta'])
except KeyError:
blockMeta2 = {'type': ''}
pass
blockContent = blockContent[blockContent.find(b'\n') + 1:]
try:
blockContent = blockContent.decode()
except AttributeError:
pass
if not self._crypto.verifyPow(blockContent, blockMeta2):
logger.warn("%s has invalid or insufficient proof of work token, deleting..." % str(i))
self._core.removeBlock(i)
continue
else:
if (('sig' in blockMetadata) and ('id' in blockMeta2)): # id doesn't exist in blockMeta2, so this won't workin the first place
#blockData = json.dumps(blockMetadata['meta']) + blockMetadata[blockMetadata.rfind(b'}') + 1:]
creator = self._utils.getPeerByHashId(blockMeta2['id'])
try:
creator = creator.decode()
except AttributeError:
pass
if self._core._crypto.edVerify(blockMetadata['meta'] + blockContent, creator, blockMetadata['sig'], encodedData=True):
logger.info('%s was signed' % str(i))
self._core.updateBlockInfo(i, 'sig', 'true')
else:
logger.warn('%s has an invalid signature' % str(i))
self._core.updateBlockInfo(i, 'sig', 'false')
try:
logger.info('Block type is %s' % str(blockMeta2['type']))
self._core.updateBlockInfo(i, 'dataType', blockMeta2['type'])
self.removeBlockFromProcessingList(i)
self.removeBlockFromProcessingList(i)
except KeyError:
logger.warn('Block has no type')
pass
except json.decoder.JSONDecodeError:
logger.warn('Could not decode block metadata')
self.removeBlockFromProcessingList(i)
self.processBlocksThreads -= 1
return
def removeBlockFromProcessingList(self, block):
'''
Remove a block from the processing list
'''
try:
self.blocksProcessing.remove(block)
except ValueError:
return False
else:
return True
def downloadBlock(self, hash, peerTries=3):
'''
Download a block from random order of peers
'''
retVal = False
peerList = self._core.listAdders()
blocks = ''
peerTryCount = 0
for i in peerList:
try:
if self.peerData[i]['failCount'] >= self.highFailureAmount:
continue
except KeyError:
pass
if peerTryCount >= peerTries:
break
hasher = hashlib.sha3_256()
data = self.performGet('getData', i, hash, skipHighFailureAddress=True)
if data == False or len(data) > 10000000 or data == '':
peerTryCount += 1
continue
try:
data = base64.b64decode(data)
except binascii.Error:
data = b''
hasher.update(data)
digest = hasher.hexdigest()
if type(digest) is bytes:
digest = digest.decode()
if digest == hash.strip():
self._core.setData(data)
logger.info('Successfully obtained data for %s' % str(hash), timestamp=True)
retVal = True
break
else:
logger.warn("Failed to validate %s -- hash calculated was %s" % (hash, digest))
peerTryCount += 1
return retVal
def urlencode(self, data):
'''
URL encodes the data
'''
return urllib.parse.quote_plus(data)
def performGet(self, action, peer, data=None, skipHighFailureAddress=False, selfCheck=True):
'''
Performs a request to a peer through Tor or i2p (currently only Tor)
'''
if not peer.endswith('.onion') and not peer.endswith('.onion/') and not peer.endswith('.b32.i2p'):
raise PeerError('Currently only Tor/i2p .onion/.b32.i2p peers are supported. You must manually specify .onion/.b32.i2p')
if len(self._core.hsAdder.strip()) == 0:
raise Exception("Could not perform self address check in performGet due to not knowing our address")
if selfCheck:
if peer.replace('/', '') == self._core.hsAdder:
logger.warn('Tried to performGet to own hidden service, but selfCheck was not set to false')
return
# Store peer in peerData dictionary (non permanent)
if not peer in self.peerData:
self.peerData[peer] = {'connectCount': 0, 'failCount': 0, 'lastConnectTime': self._utils.getEpoch()}
socksPort = sys.argv[2]
'''We use socks5h to use tor as DNS'''
if peer.endswith('onion'):
proxies = {'http': 'socks5h://127.0.0.1:' + str(socksPort), 'https': 'socks5h://127.0.0.1:' + str(socksPort)}
elif peer.endswith('b32.i2p'):
proxies = {'http': 'http://127.0.0.1:4444'}
headers = {'user-agent': 'PyOnionr'}
url = 'http://' + peer + '/public/?action=' + self.urlencode(action)
if data != None:
url = url + '&data=' + self.urlencode(data)
try:
if skipHighFailureAddress and self.peerData[peer]['failCount'] > self.highFailureAmount:
retData = False
logger.debug('Skipping %s because of high failure rate.' % peer)
else:
self.peerStatus[peer] = action
logger.debug('Contacting %s on port %s' % (peer, str(socksPort)))
try:
r = requests.get(url, headers=headers, proxies=proxies, allow_redirects=False, timeout=(15, 30))
except ValueError:
proxies = {'http': 'socks5://127.0.0.1:' + str(socksPort), 'https': 'socks5://127.0.0.1:' + str(socksPort)}
r = requests.get(url, headers=headers, proxies=proxies, allow_redirects=False, timeout=(15, 30))
retData = r.text
except requests.exceptions.RequestException as e:
logger.debug('%s failed with peer %s' % (action, peer))
logger.debug('Error: %s' % str(e))
retData = False
if not retData:
self.peerData[peer]['failCount'] += 1
else:
self.peerData[peer]['connectCount'] += 1
self.peerData[peer]['failCount'] -= 1
self.peerData[peer]['lastConnectTime'] = self._utils.getEpoch()
self._core.setAddressInfo(peer, 'lastConnect', self._utils.getEpoch())
return retData
def peerStatusTaken(self, peer, status):
'''
Returns if we are currently performing a specific action with a peer.
'''
try:
if self.peerStatus[peer] == status:
return True
except KeyError:
pass
return False
def header(self, message = logger.colors.fg.pink + logger.colors.bold + 'Onionr' + logger.colors.reset + logger.colors.fg.pink + ' has started.'):
if os.path.exists('static-data/header.txt'):
with open('static-data/header.txt', 'rb') as file:
# only to stdout, not file or log or anything
print(file.read().decode().replace('P', logger.colors.fg.pink).replace('W', logger.colors.reset + logger.colors.bold).replace('G', logger.colors.fg.green).replace('\n', logger.colors.reset + '\n'))
logger.info(logger.colors.fg.lightgreen + '-> ' + str(message) + logger.colors.reset + logger.colors.fg.lightgreen + ' <-\n')
shouldRun = False
debug = True
developmentMode = False
if config.get('general.dev_mode', True):
developmentMode = True
try:
if sys.argv[1] == 'run':
shouldRun = True
except IndexError:
pass
if shouldRun:
try:
OnionrCommunicate(debug, developmentMode)
except KeyboardInterrupt:
sys.exit(1)
pass


@ -19,30 +19,49 @@
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
-import sys, os, core, config, json, onionrblockapi as block, requests, time, logger, threading, onionrplugins as plugins, base64
+import sys, os, core, config, json, requests, time, logger, threading, base64, onionr
-import onionrexceptions
+import onionrexceptions, onionrpeers, onionrevents as events, onionrplugins as plugins, onionrblockapi as block
import onionrdaemontools
from defusedxml import minidom
class OnionrCommunicatorDaemon:
def __init__(self, debug, developmentMode):
logger.warn('New (unstable) communicator is being used.')
# list of timer instances
self.timers = []
self._core = core.Core(torPort=sys.argv[2])
# initalize core with Tor socks port being 3rd argument
self.proxyPort = sys.argv[2]
self._core = core.Core(torPort=self.proxyPort)
# intalize NIST beacon salt and time
self.nistSaltTimestamp = 0
self.powSalt = 0
self.blockToUpload = ''
# loop time.sleep delay in seconds
self.delay = 1
self.proxyPort = sys.argv[2]
# time app started running for info/statistics purposes
self.startTime = self._core._utils.getEpoch()
# lists of connected peers and peers we know we can't reach currently
self.onlinePeers = []
self.offlinePeers = []
self.peerProfiles = [] # list of peer's profiles (onionrpeers.PeerProfile instances)
# amount of threads running by name, used to prevent too many
self.threadCounts = {}
# set true when shutdown command recieved
self.shutdown = False
-self.blockQueue = [] # list of new blocks to download
+# list of new blocks to download, added to when new block lists are fetched from peers
self.blockQueue = []
# list of blocks currently downloading, avoid s
self.currentDownloading = []
# Clear the daemon queue for any dead messages
if os.path.exists(self._core.queueDB):
@ -51,39 +70,62 @@ class OnionrCommunicatorDaemon:
# Loads in and starts the enabled plugins # Loads in and starts the enabled plugins
plugins.reload() plugins.reload()
# Print nice header thing :) # daemon tools are misc daemon functions, e.g. announce to online peers
if config.get('general.display_header', True): # intended only for use by OnionrCommunicatorDaemon
self.header() #self.daemonTools = onionrdaemontools.DaemonTools(self)
self.daemonTools = onionrdaemontools.DaemonTools(self)
if debug or developmentMode: if debug or developmentMode:
OnionrCommunicatorTimers(self, self.heartbeat, 10) OnionrCommunicatorTimers(self, self.heartbeat, 10)
self.getOnlinePeers() # Print nice header thing :)
if config.get('general.display_header', True) and not self.shutdown:
self.header()
# Set timers, function reference, seconds
# requiresPeer True means the timer function won't fire if we have no connected peers
# TODO: make some of these timer counts configurable
OnionrCommunicatorTimers(self, self.daemonCommands, 5) OnionrCommunicatorTimers(self, self.daemonCommands, 5)
OnionrCommunicatorTimers(self, self.detectAPICrash, 5) OnionrCommunicatorTimers(self, self.detectAPICrash, 5)
OnionrCommunicatorTimers(self, self.getOnlinePeers, 60) peerPoolTimer = OnionrCommunicatorTimers(self, self.getOnlinePeers, 60)
OnionrCommunicatorTimers(self, self.lookupBlocks, 7) OnionrCommunicatorTimers(self, self.lookupBlocks, 7, requiresPeer=True, maxThreads=1)
OnionrCommunicatorTimers(self, self.getBlocks, 10) OnionrCommunicatorTimers(self, self.getBlocks, 10, requiresPeer=True)
OnionrCommunicatorTimers(self, self.clearOfflinePeer, 120) OnionrCommunicatorTimers(self, self.clearOfflinePeer, 58)
OnionrCommunicatorTimers(self, self.lookupKeys, 125) OnionrCommunicatorTimers(self, self.lookupKeys, 60, requiresPeer=True)
OnionrCommunicatorTimers(self, self.lookupAdders, 600) OnionrCommunicatorTimers(self, self.lookupAdders, 60, requiresPeer=True)
announceTimer = OnionrCommunicatorTimers(self, self.daemonTools.announceNode, 305, requiresPeer=True, maxThreads=1)
cleanupTimer = OnionrCommunicatorTimers(self, self.peerCleanup, 300, requiresPeer=True)
# Main daemon loop, mainly for calling timers, do not do any complex operations here # set loop to execute instantly to load up peer pool (replaced old pool init wait)
peerPoolTimer.count = (peerPoolTimer.frequency - 1)
cleanupTimer.count = (cleanupTimer.frequency - 60)
announceTimer.count = (cleanupTimer.frequency - 60)
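Priming a timer's count this way works because each timer only fires once its count catches up with its frequency in main-loop passes; setting count to frequency minus one (or minus sixty) makes it run on an early pass instead of waiting a full interval. A minimal sketch of the same idea, using a hypothetical Timer class rather than the real OnionrCommunicatorTimers:

# Minimal sketch of the count-priming trick (hypothetical Timer class, not the
# real OnionrCommunicatorTimers): a timer normally waits `frequency` main-loop
# passes before firing, so priming `count` makes it fire almost immediately.
class Timer:
    def __init__(self, func, frequency):
        self.func = func
        self.frequency = frequency
        self.count = 0

    def tick(self):
        self.count += 1
        if self.count >= self.frequency:
            self.func()
            self.count = 0

refresh = Timer(lambda: print('refreshing peer pool'), 60)
refresh.count = refresh.frequency - 1  # fires on the next tick instead of after 60
refresh.tick()  # prints immediately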
# Main daemon loop, mainly for calling timers, don't do any complex operations here to avoid locking
try:
while not self.shutdown: while not self.shutdown:
for i in self.timers: for i in self.timers:
if self.shutdown:
break
i.processTimer() i.processTimer()
time.sleep(self.delay) time.sleep(self.delay)
except KeyboardInterrupt:
self.shutdown = True
pass
logger.info('Goodbye.') logger.info('Goodbye.')
self._core._utils.localCommand('shutdown')
time.sleep(0.5)
def lookupKeys(self): def lookupKeys(self):
'''Lookup new keys''' '''Lookup new keys'''
logger.info('LOOKING UP NEW KEYS') logger.debug('Looking up new keys...')
tryAmount = 1 tryAmount = 1
for i in range(tryAmount): for i in range(tryAmount):
# Download new key list from random online peers
peer = self.pickOnlinePeer() peer = self.pickOnlinePeer()
newKeys = self.peerAction(peer, action='kex') newKeys = self.peerAction(peer, action='kex')
self._core._utils.mergeKeys(newKeys) self._core._utils.mergeKeys(newKeys)
self.decrementThreadCount('lookupKeys') self.decrementThreadCount('lookupKeys')
return return
@ -92,64 +134,108 @@ class OnionrCommunicatorDaemon:
logger.info('LOOKING UP NEW ADDRESSES') logger.info('LOOKING UP NEW ADDRESSES')
tryAmount = 1 tryAmount = 1
for i in range(tryAmount): for i in range(tryAmount):
# Download new peer address list from random online peers
peer = self.pickOnlinePeer() peer = self.pickOnlinePeer()
newAdders = self.peerAction(peer, action='pex') newAdders = self.peerAction(peer, action='pex')
self._core._utils.mergeAdders(newAdders) self._core._utils.mergeAdders(newAdders)
self.decrementThreadCount('lookupAdders')
self.decrementThreadCount('lookupAdders')
def lookupBlocks(self): def lookupBlocks(self):
'''Lookup new blocks''' '''Lookup new blocks & add them to download queue'''
logger.info('LOOKING UP NEW BLOCKS') logger.info('LOOKING UP NEW BLOCKS')
tryAmount = 2 tryAmount = 2
newBlocks = '' newBlocks = ''
existingBlocks = self._core.getBlockList()
triedPeers = [] # list of peers we've tried this time around
for i in range(tryAmount): for i in range(tryAmount):
peer = self.pickOnlinePeer() peer = self.pickOnlinePeer() # select random online peer
newDBHash = self.peerAction(peer, 'getDBHash') # if we've already tried all the online peers this time around, stop
if newDBHash == False: if peer in triedPeers:
if len(self.onlinePeers) == len(triedPeers):
break
else:
continue continue
newDBHash = self.peerAction(peer, 'getDBHash') # get their db hash
if newDBHash == False:
continue # if request failed, restart loop (peer is added to offline peers automatically)
triedPeers.append(peer)
if newDBHash != self._core.getAddressInfo(peer, 'DBHash'): if newDBHash != self._core.getAddressInfo(peer, 'DBHash'):
self._core.setAddressInfo(peer, 'DBHash', newDBHash) self._core.setAddressInfo(peer, 'DBHash', newDBHash)
try:
newBlocks = self.peerAction(peer, 'getBlockHashes') newBlocks = self.peerAction(peer, 'getBlockHashes')
except Exception as error:
logger.warn("could not get new blocks with " + peer, error=error)
newBlocks = False
if newBlocks != False: if newBlocks != False:
# if request was a success # if request was a success
for i in newBlocks.split('\n'): for i in newBlocks.split('\n'):
if self._core._utils.validateHash(i): if self._core._utils.validateHash(i):
# if newline separated string is valid hash # if newline separated string is valid hash
if not os.path.exists('data/blocks/' + i + '.db'): if not i in existingBlocks:
# if block does not exist on disk and is not already in block queue # if block does not exist on disk and is not already in block queue
if i not in self.blockQueue: if i not in self.blockQueue and not self._core._blacklist.inBlacklist(i):
self.blockQueue.append(i) self.blockQueue.append(i)
self.decrementThreadCount('lookupBlocks') self.decrementThreadCount('lookupBlocks')
return return
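The queueing above reduces to a simple filter over the peer's newline-separated response: keep only well-formed hashes that we do not already store, have not already queued, and have not blacklisted. A standalone sketch of that filter, where the validator and collections stand in for the real _utils, block list, blockQueue and _blacklist:

# Sketch of the filtering done in lookupBlocks(); validate_hash, existing,
# queue and blacklist stand in for the real Onionr helpers and collections.
def filter_new_hashes(response, validate_hash, existing, queue, blacklist):
    for candidate in response.split('\n'):
        if not validate_hash(candidate):
            continue  # ignore anything that is not a well-formed hash
        if candidate in existing or candidate in queue or candidate in blacklist:
            continue  # already stored, already queued, or banned
        queue.append(candidate)
    return queue

print(filter_new_hashes('abc\ndef\n', lambda h: len(h) > 2, {'def'}, [], set()))
# ['abc']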
def getBlocks(self): def getBlocks(self):
'''download new blocks''' '''download new blocks in queue'''
for blockHash in self.blockQueue: for blockHash in self.blockQueue:
logger.info("ATTEMPTING TO DOWNLOAD " + blockHash) if self.shutdown:
content = self.peerAction(self.pickOnlinePeer(), 'getData', data=blockHash) break
if blockHash in self.currentDownloading:
logger.debug('ALREADY DOWNLOADING ' + blockHash)
continue
if blockHash in self._core.getBlockList():
logger.debug('%s is already saved' % (blockHash,))
self.blockQueue.remove(blockHash)
continue
self.currentDownloading.append(blockHash)
logger.info("Attempting to download %s..." % blockHash)
peerUsed = self.pickOnlinePeer()
content = self.peerAction(peerUsed, 'getData', data=blockHash) # block content from random peer (includes metadata)
if content != False: if content != False:
try: try:
content = content.encode() content = content.encode()
except AttributeError: except AttributeError:
pass pass
content = base64.b64decode(content) content = base64.b64decode(content) # content is base64 encoded in transport
if self._core._crypto.sha3Hash(content) == blockHash: realHash = self._core._crypto.sha3Hash(content)
try:
realHash = realHash.decode() # bytes on some versions for some reason
except AttributeError:
pass
if realHash == blockHash:
content = content.decode() # decode here because sha3Hash needs bytes above content = content.decode() # decode here because sha3Hash needs bytes above
metas = self._core._utils.getBlockMetadataFromData(content) # returns tuple(metadata, meta), meta is also in metadata metas = self._core._utils.getBlockMetadataFromData(content) # returns tuple(metadata, meta), meta is also in metadata
metadata = metas[0] metadata = metas[0]
meta = metas[1] #meta = metas[1]
if self._core._utils.validateMetadata(metadata): if self._core._utils.validateMetadata(metadata, metas[2]): # check if metadata is valid, and verify nonce
if self._core._crypto.verifyPow(metas[2], metadata): if self._core._crypto.verifyPow(content): # check if POW is enough/correct
logger.info('Block passed proof, saving.') logger.info('Block passed proof, saving.')
self._core.setData(content) self._core.setData(content)
self._core.addToBlockDB(blockHash, dataSaved=True) self._core.addToBlockDB(blockHash, dataSaved=True)
self._core._utils.processBlockMetadata(blockHash) # caches block metadata values to block database
else: else:
logger.warn('POW failed for block ' + blockHash) logger.warn('POW failed for block ' + blockHash)
else: else:
logger.warn('Metadata for ' + blockHash + ' is invalid.') if self._core._blacklist.inBlacklist(realHash):
self.blockQueue.remove(blockHash) logger.warn('%s is blacklisted' % (realHash,))
else: else:
logger.warn('Block hash validation failed for ' + blockHash + ' got ' + self._core._crypto.sha3Hash(content)) logger.warn('Metadata for ' + blockHash + ' is invalid.')
self._core._blacklist.addToDB(blockHash)
else:
# if block didn't meet expected hash
tempHash = self._core._crypto.sha3Hash(content) # lazy hack, TODO use var
try:
tempHash = tempHash.decode()
except AttributeError:
pass
# Punish peer for sharing invalid block (not always malicious, but is bad regardless)
onionrpeers.PeerProfiles(peerUsed, self._core).addScore(-50)
logger.warn('Block hash validation failed for ' + blockHash + ' got ' + tempHash)
self.blockQueue.remove(blockHash) # remove from block queue both if success or false
self.currentDownloading.remove(blockHash)
self.decrementThreadCount('getBlocks') self.decrementThreadCount('getBlocks')
return return
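Every queued block therefore passes the same checks before it is stored: base64-decode the transport payload, recompute the sha3-256 hash and compare it to the requested hash, validate the metadata (including its nonce), and verify the proof of work; a failure either blacklists the hash or penalizes the peer that served it. A sketch of that pipeline in isolation, with the metadata and POW checks left as stand-ins for _utils.validateMetadata and _crypto.verifyPow:

import base64, hashlib

# Sketch of the per-block verification order used in getBlocks(); the metadata
# and POW checks are placeholders for _utils.validateMetadata / _crypto.verifyPow.
def verify_block(expected_hash, b64_payload, validate_metadata, verify_pow):
    content = base64.b64decode(b64_payload)           # blocks travel base64-encoded
    real_hash = hashlib.sha3_256(content).hexdigest()
    if real_hash != expected_hash:
        return False  # wrong data for this hash; caller penalizes the peer
    if not validate_metadata(content):
        return False  # malformed or replayed metadata; caller may blacklist
    if not verify_pow(content):
        return False  # insufficient proof of work
    return True       # safe to setData() and record in the block database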
@ -184,20 +270,36 @@ class OnionrCommunicatorDaemon:
except IndexError: except IndexError:
pass pass
else: else:
logger.debug('removed ' + removed + ' from offline list to try them again.') logger.debug('Removed ' + removed + ' from offline list, will try them again.')
self.decrementThreadCount('clearOfflinePeer') self.decrementThreadCount('clearOfflinePeer')
def getOnlinePeers(self): def getOnlinePeers(self):
'''Manages the self.onlinePeers attribute list''' '''Manages the self.onlinePeers attribute list, connects to more peers if we have none connected'''
logger.info('Refreshing peer pool.') logger.info('Refreshing peer pool.')
maxPeers = 4 maxPeers = 6
needed = maxPeers - len(self.onlinePeers) needed = maxPeers - len(self.onlinePeers)
for i in range(needed): for i in range(needed):
if len(self.onlinePeers) == 0:
self.connectNewPeer(useBootstrap=True)
else:
self.connectNewPeer() self.connectNewPeer()
if self.shutdown:
break
else:
if len(self.onlinePeers) == 0:
logger.warn('Could not connect to any peer.')
self.decrementThreadCount('getOnlinePeers') self.decrementThreadCount('getOnlinePeers')
def connectNewPeer(self, peer=''): def addBootstrapListToPeerList(self, peerList):
'''Add the bootstrap list to the peer list (no duplicates)'''
for i in self._core.bootstrapList:
if i not in peerList and i not in self.offlinePeers and i != self._core.hsAddress:
peerList.append(i)
self._core.addAddress(i)
def connectNewPeer(self, peer='', useBootstrap=False):
'''Adds a new random online peer to self.onlinePeers''' '''Adds a new random online peer to self.onlinePeers'''
retData = False retData = False
tried = self.offlinePeers tried = self.offlinePeers
@ -209,32 +311,52 @@ class OnionrCommunicatorDaemon:
else: else:
peerList = self._core.listAdders() peerList = self._core.listAdders()
if len(peerList) == 0: peerList = onionrpeers.getScoreSortedPeerList(self._core)
peerList.extend(self._core.bootstrapList)
if len(peerList) == 0 or useBootstrap:
# Avoid duplicating bootstrap addresses in peerList
self.addBootstrapListToPeerList(peerList)
for address in peerList: for address in peerList:
if not config.get('tor.v3onions') and len(address) == 62:
continue
if len(address) == 0 or address in tried or address in self.onlinePeers: if len(address) == 0 or address in tried or address in self.onlinePeers:
continue continue
if self.shutdown:
return
if self.peerAction(address, 'ping') == 'pong!': if self.peerAction(address, 'ping') == 'pong!':
logger.info('connected to ' + address) logger.info('Connected to ' + address)
time.sleep(0.1)
if address not in self.onlinePeers:
self.onlinePeers.append(address) self.onlinePeers.append(address)
retData = address retData = address
# add peer to profile list if they're not in it
for profile in self.peerProfiles:
if profile.address == address:
break
else:
self.peerProfiles.append(onionrpeers.PeerProfiles(address, self._core))
break break
else: else:
tried.append(address) tried.append(address)
logger.debug('failed to connect to ' + address) logger.debug('Failed to connect to ' + address)
else:
if len(self.onlinePeers) == 0:
logger.warn('Could not connect to any peer')
return retData return retData
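The for/else used above when registering a peer profile is the usual "append only if absent" idiom: the else branch runs only when the loop completes without a break, meaning no existing profile matched the address. In isolation:

# The for/else idiom used in connectNewPeer() to register a profile only once.
class DummyProfile:
    def __init__(self, address):
        self.address = address

def ensure_profile(profiles, address, make_profile=DummyProfile):
    for profile in profiles:
        if profile.address == address:
            break                               # already tracked, nothing to do
    else:
        profiles.append(make_profile(address))  # loop ended without a match

profiles = []
ensure_profile(profiles, 'abc.onion')
ensure_profile(profiles, 'abc.onion')
print(len(profiles))  # 1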
def peerCleanup(self):
'''This just calls onionrpeers.cleanupPeers, which removes dead or bad peers (offline too long, too slow)'''
onionrpeers.peerCleanup(self._core)
self.decrementThreadCount('peerCleanup')
def printOnlinePeers(self): def printOnlinePeers(self):
'''logs online peer list''' '''logs online peer list'''
if len(self.onlinePeers) == 0: if len(self.onlinePeers) == 0:
logger.warn('No online peers') logger.warn('No online peers')
return else:
logger.info('Online peers:')
for i in self.onlinePeers: for i in self.onlinePeers:
logger.info(self.onlinePeers[i]) score = str(self.getPeerProfileInstance(i).score)
logger.info(i + ', score: ' + score)
def peerAction(self, peer, action, data=''): def peerAction(self, peer, action, data=''):
'''Perform a get request to a peer''' '''Perform a get request to a peer'''
@ -244,13 +366,33 @@ class OnionrCommunicatorDaemon:
url = 'http://' + peer + '/public/?action=' + action url = 'http://' + peer + '/public/?action=' + action
if len(data) > 0: if len(data) > 0:
url += '&data=' + data url += '&data=' + data
self._core.setAddressInfo(peer, 'lastConnectAttempt', self._core._utils.getEpoch()) # mark the time we're trying to request this peer
retData = self._core._utils.doGetRequest(url, port=self.proxyPort) retData = self._core._utils.doGetRequest(url, port=self.proxyPort)
# if request failed, (error), mark peer offline
if retData == False: if retData == False:
try: try:
self.getPeerProfileInstance(peer).addScore(-10)
self.onlinePeers.remove(peer) self.onlinePeers.remove(peer)
self.getOnlinePeers() # Will only add a new peer to pool if needed self.getOnlinePeers() # Will only add a new peer to pool if needed
except ValueError: except ValueError:
pass pass
else:
self._core.setAddressInfo(peer, 'lastConnect', self._core._utils.getEpoch())
self.getPeerProfileInstance(peer).addScore(1)
return retData
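The reputation effect of peerAction is small and symmetric: every attempt is timestamped, a failed request costs the peer 10 points and removes it from the online pool, and a successful one records the connect time and earns 1 point. A compact sketch of that bookkeeping, with PeerScore standing in for onionrpeers.PeerProfiles:

# Sketch of the score bookkeeping around peerAction(); PeerScore stands in for
# the persistent onionrpeers.PeerProfiles objects.
class PeerScore:
    def __init__(self):
        self.score = 0
    def add(self, amount):
        self.score += amount

def record_result(profile, online_peers, peer, succeeded):
    if succeeded:
        profile.add(1)                 # small reward for every good response
    else:
        profile.add(-10)               # heavier penalty for a failed request
        if peer in online_peers:
            online_peers.remove(peer)  # drop it until rediscovered

p = PeerScore()
record_result(p, ['abc.onion'], 'abc.onion', succeeded=False)
print(p.score)  # -10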
def getPeerProfileInstance(self, peer):
'''Gets a peer profile instance from the list of profiles, by address name'''
for i in self.peerProfiles:
# if the peer's profile is already loaded, return that
if i.address == peer:
retData = i
break
else:
# if the peer's profile is not loaded, return a new one. connectNewPeer adds it to the list on connect
retData = onionrpeers.PeerProfiles(peer, self._core)
return retData return retData
def heartbeat(self): def heartbeat(self):
@ -264,6 +406,8 @@ class OnionrCommunicatorDaemon:
cmd = self._core.daemonQueue() cmd = self._core.daemonQueue()
if cmd is not False: if cmd is not False:
events.event('daemon_command', onionr = None, data = {'cmd' : cmd})
if cmd[0] == 'shutdown': if cmd[0] == 'shutdown':
self.shutdown = True self.shutdown = True
elif cmd[0] == 'announceNode': elif cmd[0] == 'announceNode':
@ -273,23 +417,50 @@ class OnionrCommunicatorDaemon:
open('data/.runcheck', 'w+').close() open('data/.runcheck', 'w+').close()
elif cmd[0] == 'connectedPeers': elif cmd[0] == 'connectedPeers':
self.printOnlinePeers() self.printOnlinePeers()
elif cmd[0] == 'kex':
for i in self.timers:
if i.timerFunction.__name__ == 'lookupKeys':
i.count = (i.frequency - 1)
elif cmd[0] == 'pex':
for i in self.timers:
if i.timerFunction.__name__ == 'lookupAdders':
i.count = (i.frequency - 1)
elif cmd[0] == 'uploadBlock':
self.blockToUpload = cmd[1]
threading.Thread(target=self.uploadBlock).start()
else: else:
logger.info('Received daemonQueue command: ' + cmd[0]) logger.info('Received daemonQueue command: ' + cmd[0])
self.decrementThreadCount('daemonCommands') self.decrementThreadCount('daemonCommands')
def uploadBlock(self):
'''Upload our block to a few peers'''
# when inserting a block, we try to upload it to a few peers to add some deniability
triedPeers = []
if not self._core._utils.validateHash(self.blockToUpload):
logger.warn('Requested to upload invalid block')
return
for i in range(max(len(self.onlinePeers), 2)):
peer = self.pickOnlinePeer()
if peer in triedPeers:
continue
triedPeers.append(peer)
url = 'http://' + peer + '/public/upload/'
data = {'block': block.Block(self.blockToUpload).getRaw()}
proxyType = ''
if peer.endswith('.onion'):
proxyType = 'tor'
elif peer.endswith('.i2p'):
proxyType = 'i2p'
logger.info("Uploading block")
self._core._utils.doPostRequest(url, data=data, proxyType=proxyType)
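Upload targets are proxied by transport: .onion peers go through the Tor SOCKS proxy, .i2p peers through I2P, and anything else gets no proxy. The same suffix rule as a small helper, for illustration only:

# Illustrative helper matching the suffix check in uploadBlock().
def proxy_type_for(peer):
    if peer.endswith('.onion'):
        return 'tor'
    if peer.endswith('.i2p'):
        return 'i2p'
    return ''

print(proxy_type_for('example.onion'))  # 'tor'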
def announce(self, peer): def announce(self, peer):
'''Announce to peers''' '''Announce our address to peers'''
announceCount = 0 if self.daemonTools.announceNode():
announceAmount = 2
for peer in self._core.listAdders():
announceCount += 1
if self.peerAction(peer, 'announce', self._core.hsAdder) == 'Success':
logger.info('Successfully introduced node to ' + peer) logger.info('Successfully introduced node to ' + peer)
break
else: else:
if announceCount == announceAmount: logger.warn('Could not introduce node.')
logger.warn('Could not introduce node. Try again soon')
break
def detectAPICrash(self): def detectAPICrash(self):
'''exit if the api server crashes/stops''' '''exit if the api server crashes/stops'''
@ -300,6 +471,7 @@ class OnionrCommunicatorDaemon:
time.sleep(1) time.sleep(1)
else: else:
# This executes if the api is NOT detected to be running # This executes if the api is NOT detected to be running
events.event('daemon_crash', onionr = None, data = {})
logger.error('Daemon detected API crash (or otherwise unable to reach API after long time), stopping...') logger.error('Daemon detected API crash (or otherwise unable to reach API after long time), stopping...')
self.shutdown = True self.shutdown = True
self.decrementThreadCount('detectAPICrash') self.decrementThreadCount('detectAPICrash')
@ -308,15 +480,16 @@ class OnionrCommunicatorDaemon:
if os.path.exists('static-data/header.txt'): if os.path.exists('static-data/header.txt'):
with open('static-data/header.txt', 'rb') as file: with open('static-data/header.txt', 'rb') as file:
# only to stdout, not file or log or anything # only to stdout, not file or log or anything
print(file.read().decode().replace('P', logger.colors.fg.pink).replace('W', logger.colors.reset + logger.colors.bold).replace('G', logger.colors.fg.green).replace('\n', logger.colors.reset + '\n')) sys.stderr.write(file.read().decode().replace('P', logger.colors.fg.pink).replace('W', logger.colors.reset + logger.colors.bold).replace('G', logger.colors.fg.green).replace('\n', logger.colors.reset + '\n').replace('B', logger.colors.bold).replace('V', onionr.ONIONR_VERSION))
logger.info(logger.colors.fg.lightgreen + '-> ' + str(message) + logger.colors.reset + logger.colors.fg.lightgreen + ' <-\n') logger.info(logger.colors.fg.lightgreen + '-> ' + str(message) + logger.colors.reset + logger.colors.fg.lightgreen + ' <-\n')
class OnionrCommunicatorTimers: class OnionrCommunicatorTimers:
def __init__(self, daemonInstance, timerFunction, frequency, makeThread=True, threadAmount=1, maxThreads=5): def __init__(self, daemonInstance, timerFunction, frequency, makeThread=True, threadAmount=1, maxThreads=5, requiresPeer=False):
self.timerFunction = timerFunction self.timerFunction = timerFunction
self.frequency = frequency self.frequency = frequency
self.threadAmount = threadAmount self.threadAmount = threadAmount
self.makeThread = makeThread self.makeThread = makeThread
self.requiresPeer = requiresPeer
self.daemonInstance = daemonInstance self.daemonInstance = daemonInstance
self.maxThreads = maxThreads self.maxThreads = maxThreads
self._core = self.daemonInstance._core self._core = self.daemonInstance._core
@ -325,13 +498,21 @@ class OnionrCommunicatorTimers:
self.count = 0 self.count = 0
def processTimer(self): def processTimer(self):
self.count += 1
# mark how many instances of a thread we have (decremented at thread end)
try: try:
self.daemonInstance.threadCounts[self.timerFunction.__name__] self.daemonInstance.threadCounts[self.timerFunction.__name__]
except KeyError: except KeyError:
self.daemonInstance.threadCounts[self.timerFunction.__name__] = 0 self.daemonInstance.threadCounts[self.timerFunction.__name__] = 0
# execute the timer function if it is time, and we are not missing a *required* online peer
if self.count == self.frequency: if self.count == self.frequency:
try:
if self.requiresPeer and len(self.daemonInstance.onlinePeers) == 0:
raise onionrexceptions.OnlinePeerNeeded
except onionrexceptions.OnlinePeerNeeded:
pass
else:
if self.makeThread: if self.makeThread:
for i in range(self.threadAmount): for i in range(self.threadAmount):
if self.daemonInstance.threadCounts[self.timerFunction.__name__] >= self.maxThreads: if self.daemonInstance.threadCounts[self.timerFunction.__name__] >= self.maxThreads:
@ -342,8 +523,8 @@ class OnionrCommunicatorTimers:
newThread.start() newThread.start()
else: else:
self.timerFunction() self.timerFunction()
self.count = 0 self.count = -1 # negative 1 because it's incremented at the bottom
self.count += 1
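Taken together, a timer is registered by constructing OnionrCommunicatorTimers with the daemon instance, a bound function, and a frequency measured in main-loop passes; requiresPeer gates it on having at least one online peer and maxThreads caps concurrent runs per function name. A hedged usage sketch (myTask and the 30-pass frequency are hypothetical, not part of the Onionr source):

# Hedged usage sketch: registering an extra timer from OnionrCommunicatorDaemon
# __init__ (myTask and the 30-pass frequency are hypothetical, not in the source).
def exampleRegisterTimer(daemon):
    taskTimer = OnionrCommunicatorTimers(daemon, daemon.myTask, 30,
                                         requiresPeer=True, maxThreads=1)
    taskTimer.count = taskTimer.frequency - 1  # fire on an early main-loop pass
    return taskTimer
# daemon.myTask must end with daemon.decrementThreadCount('myTask'), otherwise
# the per-name thread count never drops and maxThreads starves the timer.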
shouldRun = False shouldRun = False
debug = True debug = True
@ -358,8 +539,5 @@ except IndexError:
if shouldRun: if shouldRun:
try: try:
OnionrCommunicatorDaemon(debug, developmentMode) OnionrCommunicatorDaemon(debug, developmentMode)
except KeyboardInterrupt:
sys.exit(1)
pass
except Exception as e: except Exception as e:
logger.error('Error occurred in Communicator', error = e, timestamp = False) logger.error('Error occurred in Communicator', error = e, timestamp = False)

View File

@ -21,7 +21,8 @@ import sqlite3, os, sys, time, math, base64, tarfile, getpass, simplecrypt, hash
from onionrblockapi import Block from onionrblockapi import Block
import onionrutils, onionrcrypto, onionrproofs, onionrevents as events, onionrexceptions, onionrvalues import onionrutils, onionrcrypto, onionrproofs, onionrevents as events, onionrexceptions, onionrvalues
import onionrblacklist
import dbcreator
if sys.version_info < (3, 6): if sys.version_info < (3, 6):
try: try:
import sha3 import sha3
@ -40,11 +41,15 @@ class Core:
self.blockDB = 'data/blocks.db' self.blockDB = 'data/blocks.db'
self.blockDataLocation = 'data/blocks/' self.blockDataLocation = 'data/blocks/'
self.addressDB = 'data/address.db' self.addressDB = 'data/address.db'
self.hsAdder = '' self.hsAddress = ''
self.bootstrapFileLocation = 'static-data/bootstrap-nodes.txt' self.bootstrapFileLocation = 'static-data/bootstrap-nodes.txt'
self.bootstrapList = [] self.bootstrapList = []
self.requirements = onionrvalues.OnionrValues() self.requirements = onionrvalues.OnionrValues()
self.torPort = torPort
self.dataNonceFile = 'data/block-nonces.dat'
self.dbCreate = dbcreator.DBCreator(self)
self.usageFile = 'data/disk-usage.txt'
if not os.path.exists('data/'): if not os.path.exists('data/'):
os.mkdir('data/') os.mkdir('data/')
@ -55,7 +60,7 @@ class Core:
if os.path.exists('data/hs/hostname'): if os.path.exists('data/hs/hostname'):
with open('data/hs/hostname', 'r') as hs: with open('data/hs/hostname', 'r') as hs:
self.hsAdder = hs.read() self.hsAddress = hs.read().strip()
# Load bootstrap address list # Load bootstrap address list
if os.path.exists(self.bootstrapFileLocation): if os.path.exists(self.bootstrapFileLocation):
@ -69,6 +74,7 @@ class Core:
self._utils = onionrutils.OnionrUtils(self) self._utils = onionrutils.OnionrUtils(self)
# Initialize the crypto object # Initialize the crypto object
self._crypto = onionrcrypto.OnionrCrypto(self) self._crypto = onionrcrypto.OnionrCrypto(self)
self._blacklist = onionrblacklist.OnionrBlackList(self)
except Exception as error: except Exception as error:
logger.error('Failed to initialize core Onionr library.', error=error) logger.error('Failed to initialize core Onionr library.', error=error)
@ -76,6 +82,12 @@ class Core:
sys.exit(1) sys.exit(1)
return return
def refreshFirstStartVars(self):
'''Hack to refresh some vars which may not be set on first start'''
if os.path.exists('data/hs/hostname'):
with open('data/hs/hostname', 'r') as hs:
self.hsAddress = hs.read().strip()
def addPeer(self, peerID, powID, name=''): def addPeer(self, peerID, powID, name=''):
''' '''
Adds a public key to the key database (misleading function name) Adds a public key to the key database (misleading function name)
@ -123,7 +135,6 @@ class Core:
for i in c.execute("SELECT * FROM adders where address = '" + address + "';"): for i in c.execute("SELECT * FROM adders where address = '" + address + "';"):
try: try:
if i[0] == address: if i[0] == address:
logger.warn('Not adding existing address')
conn.close() conn.close()
return False return False
except ValueError: except ValueError:
@ -156,14 +167,13 @@ class Core:
conn.close() conn.close()
events.event('address_remove', data = {'address': address}, onionr = None) events.event('address_remove', data = {'address': address}, onionr = None)
return True return True
else: else:
return False return False
def removeBlock(self, block): def removeBlock(self, block):
''' '''
remove a block from this node remove a block from this node (does not automatically blacklist)
''' '''
if self._utils.validateHash(block): if self._utils.validateHash(block):
conn = sqlite3.connect(self.blockDB) conn = sqlite3.connect(self.blockDB)
@ -180,85 +190,20 @@ class Core:
def createAddressDB(self): def createAddressDB(self):
''' '''
Generate the address database Generate the address database
types:
1: I2P b32 address
2: Tor v2 (like facebookcorewwwi.onion)
3: Tor v3
''' '''
conn = sqlite3.connect(self.addressDB) self.dbCreate.createAddressDB()
c = conn.cursor()
c.execute('''CREATE TABLE adders(
address text,
type int,
knownPeer text,
speed int,
success int,
DBHash text,
powValue text,
failure int,
lastConnect int
);
''')
conn.commit()
conn.close()
def createPeerDB(self): def createPeerDB(self):
''' '''
Generate the peer sqlite3 database and populate it with the peers table. Generate the peer sqlite3 database and populate it with the peers table.
''' '''
# generate the peer database self.dbCreate.createPeerDB()
conn = sqlite3.connect(self.peerDB)
c = conn.cursor()
c.execute('''CREATE TABLE peers(
ID text not null,
name text,
adders text,
blockDBHash text,
forwardKey text,
dateSeen not null,
bytesStored int,
trust int,
pubkeyExchanged int,
hashID text,
pow text not null);
''')
conn.commit()
conn.close()
return
def createBlockDB(self): def createBlockDB(self):
''' '''
Create a database for blocks Create a database for blocks
hash - the hash of a block
dateReceived - the date the block was received, not necessarily when it was created
decrypted - if we can successfully decrypt the block (does not describe its current state)
dataType - data type of the block
dataFound - if the data has been found for the block
dataSaved - if the data has been saved for the block
sig - optional signature by the author (not optional if author is specified)
author - multi-round partial sha3-256 hash of author's public key
''' '''
if os.path.exists(self.blockDB): self.dbCreate.createBlockDB()
raise Exception("Block database already exists")
conn = sqlite3.connect(self.blockDB)
c = conn.cursor()
c.execute('''CREATE TABLE hashes(
hash text not null,
dateReceived int,
decrypted int,
dataType text,
dataFound int,
dataSaved int,
sig text,
author text
);
''')
conn.commit()
conn.close()
return
def addToBlockDB(self, newHash, selfInsert=False, dataSaved=False): def addToBlockDB(self, newHash, selfInsert=False, dataSaved=False):
''' '''
@ -298,16 +243,24 @@ class Core:
return data return data
def setData(self, data): def _getSha3Hash(self, data):
'''
Set the data associated with a hash
'''
data = data
hasher = hashlib.sha3_256() hasher = hashlib.sha3_256()
if not type(data) is bytes: if not type(data) is bytes:
data = data.encode() data = data.encode()
hasher.update(data) hasher.update(data)
dataHash = hasher.hexdigest() dataHash = hasher.hexdigest()
return dataHash
def setData(self, data):
'''
Set the data associated with a hash
'''
data = data
if not type(data) is bytes:
data = data.encode()
dataHash = self._getSha3Hash(data)
if type(dataHash) is bytes: if type(dataHash) is bytes:
dataHash = dataHash.decode() dataHash = dataHash.decode()
blockFileName = self.blockDataLocation + dataHash + '.dat' blockFileName = self.blockDataLocation + dataHash + '.dat'
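With the hashing factored into _getSha3Hash, block storage stays content-addressed: the file name under data/blocks/ is simply the sha3-256 hex digest of the payload. A minimal sketch of that idea, assuming the same directory layout:

import hashlib, os

# Minimal sketch of the content-addressed storage done by setData(): the file
# name is the sha3-256 hex digest of the payload itself.
def store_block(payload, block_dir='data/blocks/'):
    if not isinstance(payload, bytes):
        payload = payload.encode()
    digest = hashlib.sha3_256(payload).hexdigest()
    os.makedirs(block_dir, exist_ok=True)
    with open(os.path.join(block_dir, digest + '.dat'), 'wb') as f:
        f.write(payload)
    return digest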
@ -569,33 +522,15 @@ class Core:
c = conn.cursor() c = conn.cursor()
command = (data, address) command = (data, address)
# TODO: validate key on whitelist # TODO: validate key on whitelist
if key not in ('address', 'type', 'knownPeer', 'speed', 'success', 'DBHash', 'failure', 'lastConnect'): if key not in ('address', 'type', 'knownPeer', 'speed', 'success', 'DBHash', 'failure', 'lastConnect', 'lastConnectAttempt'):
raise Exception("Got invalid database key when setting address info") raise Exception("Got invalid database key when setting address info")
else:
c.execute('UPDATE adders SET ' + key + ' = ? WHERE address=?', command) c.execute('UPDATE adders SET ' + key + ' = ? WHERE address=?', command)
conn.commit() conn.commit()
conn.close() conn.close()
return return
def handle_direct_connection(self, data): def getBlockList(self, unsaved = False): # TODO: Use unsaved??
'''
Handles direct messages
'''
try:
data = json.loads(data)
# TODO: Determine the sender, verify, etc
if ('callback' in data) and (data['callback'] is True):
# then this is a response to the message we sent earlier
self.daemonQueueAdd('checkCallbacks', json.dumps(data))
else:
# then we should handle it and respond accordingly
self.daemonQueueAdd('incomingDirectConnection', json.dumps(data))
except Exception as e:
logger.warn('Failed to handle incoming direct message: %s' % str(e))
return
def getBlockList(self, unsaved = False): # TODO: Use unsaved
''' '''
Get list of our blocks Get list of our blocks
''' '''
@ -604,7 +539,7 @@ class Core:
if unsaved: if unsaved:
execute = 'SELECT hash FROM hashes WHERE dataSaved != 1 ORDER BY RANDOM();' execute = 'SELECT hash FROM hashes WHERE dataSaved != 1 ORDER BY RANDOM();'
else: else:
execute = 'SELECT hash FROM hashes ORDER BY RANDOM();' execute = 'SELECT hash FROM hashes ORDER BY dateReceived DESC;'
rows = list() rows = list()
for row in c.execute(execute): for row in c.execute(execute):
for i in row: for i in row:
@ -626,12 +561,15 @@ class Core:
return None return None
def getBlocksByType(self, blockType): def getBlocksByType(self, blockType, orderDate=True):
''' '''
Returns a list of blocks by the type Returns a list of blocks by the type
''' '''
conn = sqlite3.connect(self.blockDB) conn = sqlite3.connect(self.blockDB)
c = conn.cursor() c = conn.cursor()
if orderDate:
execute = 'SELECT hash FROM hashes WHERE dataType=? ORDER BY dateReceived;'
else:
execute = 'SELECT hash FROM hashes WHERE dataType=?;' execute = 'SELECT hash FROM hashes WHERE dataType=?;'
args = (blockType,) args = (blockType,)
rows = list() rows = list()
@ -656,9 +594,19 @@ class Core:
def updateBlockInfo(self, hash, key, data): def updateBlockInfo(self, hash, key, data):
''' '''
sets info associated with a block sets info associated with a block
hash - the hash of a block
dateReceived - the date the block was received, not necessarily when it was created
decrypted - if we can successfully decrypt the block (does not describe its current state)
dataType - data type of the block
dataFound - if the data has been found for the block
dataSaved - if the data has been saved for the block
sig - optional signature by the author (not optional if author is specified)
author - multi-round partial sha3-256 hash of author's public key
dateClaimed - timestamp claimed inside the block, only as trustworthy as the block author is
''' '''
if key not in ('dateReceived', 'decrypted', 'dataType', 'dataFound', 'dataSaved', 'sig', 'author'): if key not in ('dateReceived', 'decrypted', 'dataType', 'dataFound', 'dataSaved', 'sig', 'author', 'dateClaimed'):
return False return False
conn = sqlite3.connect(self.blockDB) conn = sqlite3.connect(self.blockDB)
@ -669,27 +617,42 @@ class Core:
conn.close() conn.close()
return True return True
def insertBlock(self, data, header='txt', sign=False, encryptType='', symKey='', asymPeer='', meta = {}): def insertBlock(self, data, header='txt', sign=False, encryptType='', symKey='', asymPeer='', meta = None):
''' '''
Inserts a block into the network Inserts a block into the network
encryptType must be specified to encrypt a block encryptType must be specified to encrypt a block
''' '''
retData = False
# check nonce
dataNonce = self._utils.bytesToStr(self._crypto.sha3Hash(data))
try: try:
data.decode() with open(self.dataNonceFile, 'r') as nonces:
except AttributeError: if dataNonce in nonces:
data = data.encode() return retData
except FileNotFoundError:
pass
# record nonce
with open(self.dataNonceFile, 'a') as nonceFile:
nonceFile.write(dataNonce + '\n')
if meta is None:
meta = dict()
if type(data) is bytes:
data = data.decode()
data = str(data)
retData = '' retData = ''
signature = '' signature = ''
signer = '' signer = ''
metadata = {} metadata = {}
# metadata is full block metadata, meta is internal, user specified metadata
# only use header if not set in provided meta # only use header if not set in provided meta
try: if not header is None:
meta['type'] meta['type'] = header
except KeyError: meta['type'] = str(meta['type'])
meta['type'] = header # block type
jsonMeta = json.dumps(meta) jsonMeta = json.dumps(meta)
@ -698,10 +661,14 @@ class Core:
else: else:
raise onionrexceptions.InvalidMetadata('encryptType must be asym or sym, or blank') raise onionrexceptions.InvalidMetadata('encryptType must be asym or sym, or blank')
try:
data = data.encode()
except AttributeError:
pass
# sign before encrypt, as unauthenticated crypto should not be a problem here # sign before encrypt, as unauthenticated crypto should not be a problem here
if sign: if sign:
signature = self._crypto.edSign(jsonMeta + data, key=self._crypto.privKey, encodeResult=True) signature = self._crypto.edSign(jsonMeta.encode() + data, key=self._crypto.privKey, encodeResult=True)
signer = self._crypto.pubKeyHashID() signer = self._crypto.pubKey
if len(jsonMeta) > 1000: if len(jsonMeta) > 1000:
raise onionrexceptions.InvalidMetadata('meta in json encoded form must not exceed 1000 bytes') raise onionrexceptions.InvalidMetadata('meta in json encoded form must not exceed 1000 bytes')
@ -710,40 +677,36 @@ class Core:
if encryptType == 'sym': if encryptType == 'sym':
if len(symKey) < self.requirements.passwordLength: if len(symKey) < self.requirements.passwordLength:
raise onionrexceptions.SecurityError('Weak encryption key') raise onionrexceptions.SecurityError('Weak encryption key')
jsonMeta = self._crypto.symmetricEncrypt(jsonMeta, key=symKey, returnEncoded=True) jsonMeta = self._crypto.symmetricEncrypt(jsonMeta, key=symKey, returnEncoded=True).decode()
data = self._crypto.symmetricEncrypt(data, key=symKey, returnEncoded=True) data = self._crypto.symmetricEncrypt(data, key=symKey, returnEncoded=True).decode()
signature = self._crypto.symmetricEncrypt(signature, key=symKey, returnEncoded=True) signature = self._crypto.symmetricEncrypt(signature, key=symKey, returnEncoded=True).decode()
signer = self._crypto.symmetricEncrypt(signer, key=symKey, returnEncoded=True) signer = self._crypto.symmetricEncrypt(signer, key=symKey, returnEncoded=True).decode()
elif encryptType == 'asym': elif encryptType == 'asym':
if self._utils.validatePubKey(asymPeer): if self._utils.validatePubKey(asymPeer):
jsonMeta = self._crypto.pubKeyEncrypt(jsonMeta, asymPeer, encodedData=True) jsonMeta = self._crypto.pubKeyEncrypt(jsonMeta, asymPeer, encodedData=True, anonymous=True).decode()
data = self._crypto.pubKeyEncrypt(data, asymPeer, encodedData=True) data = self._crypto.pubKeyEncrypt(data, asymPeer, encodedData=True, anonymous=True).decode()
signature = self._crypto.pubKeyEncrypt(signature, asymPeer, encodedData=True) signature = self._crypto.pubKeyEncrypt(signature, asymPeer, encodedData=True, anonymous=True).decode()
signer = self._crypto.pubKeyEncrypt(signer, asymPeer, encodedData=True, anonymous=True).decode()
else: else:
raise onionrexceptions.InvalidPubkey(asymPeer + ' is not a valid base32 encoded ed25519 key') raise onionrexceptions.InvalidPubkey(asymPeer + ' is not a valid base32 encoded ed25519 key')
powProof = onionrproofs.POW(data)
# wait for proof to complete
powToken = powProof.waitForResult()
powToken = base64.b64encode(powToken[1])
try:
powToken = powToken.decode()
except AttributeError:
pass
# compile metadata # compile metadata
metadata['meta'] = jsonMeta metadata['meta'] = jsonMeta
metadata['sig'] = signature metadata['sig'] = signature
metadata['signer'] = signer metadata['signer'] = signer
metadata['powRandomToken'] = powToken
metadata['time'] = str(self._utils.getEpoch()) metadata['time'] = str(self._utils.getEpoch())
payload = json.dumps(metadata).encode() + b'\n' + data # send block data (and metadata) to POW module to get tokenized block data
proof = onionrproofs.POW(metadata, data)
payload = proof.waitForResult()
if payload != False:
retData = self.setData(payload) retData = self.setData(payload)
self.addToBlockDB(retData, selfInsert=True, dataSaved=True) self.addToBlockDB(retData, selfInsert=True, dataSaved=True)
self.setBlockType(retData, meta['type'])
self.daemonQueueAdd('uploadBlock', retData)
if retData != False:
events.event('insertBlock', onionr = None, threaded = False)
return retData return retData
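The nonce check at the top of insertBlock is what stops the same data from being inserted twice: the sha3 of the raw data is looked up in, and then appended to, data/block-nonces.dat. The same guard in isolation; the read-and-split handling here is a simplification of the original:

import hashlib, os

# Sketch of the duplicate-insert guard in insertBlock(): the sha3 of the raw
# data acts as a nonce, checked against and then appended to the nonce file.
def seen_before(data, nonce_file='data/block-nonces.dat'):
    if not isinstance(data, bytes):
        data = data.encode()
    nonce = hashlib.sha3_256(data).hexdigest()
    if os.path.exists(nonce_file):
        with open(nonce_file, 'r') as f:
            if nonce in f.read().split('\n'):
                return True           # identical data was inserted before
    with open(nonce_file, 'a') as f:
        f.write(nonce + '\n')         # record it for next time
    return False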
def introduceNode(self): def introduceNode(self):

109
onionr/dbcreator.py Normal file
View File

@ -0,0 +1,109 @@
'''
Onionr - P2P Anonymous Data Storage & Sharing
DBCreator, creates sqlite3 databases used by Onionr
'''
'''
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
import sqlite3, os
class DBCreator:
def __init__(self, coreInst):
self.core = coreInst
def createAddressDB(self):
'''
Generate the address database
types:
1: I2P b32 address
2: Tor v2 (like facebookcorewwwi.onion)
3: Tor v3
'''
conn = sqlite3.connect(self.core.addressDB)
c = conn.cursor()
c.execute('''CREATE TABLE adders(
address text,
type int,
knownPeer text,
speed int,
success int,
DBHash text,
powValue text,
failure int,
lastConnect int,
lastConnectAttempt int,
trust int
);
''')
conn.commit()
conn.close()
def createPeerDB(self):
'''
Generate the peer sqlite3 database and populate it with the peers table.
'''
# generate the peer database
conn = sqlite3.connect(self.core.peerDB)
c = conn.cursor()
c.execute('''CREATE TABLE peers(
ID text not null,
name text,
adders text,
blockDBHash text,
forwardKey text,
dateSeen not null,
bytesStored int,
trust int,
pubkeyExchanged int,
hashID text,
pow text not null);
''')
conn.commit()
conn.close()
return
def createBlockDB(self):
'''
Create a database for blocks
hash - the hash of a block
dateReceived - the date the block was received, not necessarily when it was created
decrypted - if we can successfully decrypt the block (does not describe its current state)
dataType - data type of the block
dataFound - if the data has been found for the block
dataSaved - if the data has been saved for the block
sig - optional signature by the author (not optional if author is specified)
author - multi-round partial sha3-256 hash of author's public key
dateClaimed - timestamp claimed inside the block, only as trustworthy as the block author is
'''
if os.path.exists(self.core.blockDB):
raise Exception("Block database already exists")
conn = sqlite3.connect(self.core.blockDB)
c = conn.cursor()
c.execute('''CREATE TABLE hashes(
hash text not null,
dateReceived int,
decrypted int,
dataType text,
dataFound int,
dataSaved int,
sig text,
author text,
dateClaimed int
);
''')
conn.commit()
conn.close()
return
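Because DBCreator only reads the three database paths off the object it is given, the schemas above can be generated outside a full Core instance; a hedged usage sketch with a throwaway stand-in (paths are illustrative, not the real data/ layout):

import os

# Hedged usage sketch: DBCreator only needs addressDB/peerDB/blockDB paths on
# the object it is handed, so a tiny stand-in for Core is enough.
class FakeCore:
    addressDB = 'scratch/address.db'
    peerDB = 'scratch/peers.db'
    blockDB = 'scratch/blocks.db'

os.makedirs('scratch', exist_ok=True)
creator = DBCreator(FakeCore())
creator.createAddressDB()
creator.createPeerDB()
creator.createBlockDB()  # raises if scratch/blocks.db already exists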

View File

@ -123,18 +123,18 @@ def get_file():
return _outputfile return _outputfile
def raw(data): def raw(data, fd = sys.stdout):
''' '''
Outputs raw data to console without formatting Outputs raw data to console without formatting
''' '''
if get_settings() & OUTPUT_TO_CONSOLE: if get_settings() & OUTPUT_TO_CONSOLE:
print(data) ts = fd.write('%s\n' % data)
if get_settings() & OUTPUT_TO_FILE: if get_settings() & OUTPUT_TO_FILE:
with open(_outputfile, "a+") as f: with open(_outputfile, "a+") as f:
f.write(colors.filter(data) + '\n') f.write(colors.filter(data) + '\n')
def log(prefix, data, color = '', timestamp=True): def log(prefix, data, color = '', timestamp=True, fd = sys.stdout, prompt = True):
''' '''
Logs the data Logs the data
prefix : The prefix to the output prefix : The prefix to the output
@ -145,11 +145,11 @@ def log(prefix, data, color = '', timestamp=True):
if timestamp: if timestamp:
curTime = time.strftime("%m-%d %H:%M:%S") + ' ' curTime = time.strftime("%m-%d %H:%M:%S") + ' '
output = colors.reset + str(color) + '[' + colors.bold + str(prefix) + colors.reset + str(color) + '] ' + curTime + str(data) + colors.reset output = colors.reset + str(color) + ('[' + colors.bold + str(prefix) + colors.reset + str(color) + '] ' if prompt is True else '') + curTime + str(data) + colors.reset
if not get_settings() & USE_ANSI: if not get_settings() & USE_ANSI:
output = colors.filter(output) output = colors.filter(output)
raw(output) raw(output, fd = fd)
def readline(message = ''): def readline(message = ''):
''' '''
@ -201,31 +201,37 @@ def confirm(default = 'y', message = 'Are you sure %s? '):
return default == 'y' return default == 'y'
# debug: when there is info that could be useful for debugging purposes only # debug: when there is info that could be useful for debugging purposes only
def debug(data, timestamp=True): def debug(data, error = None, timestamp = True, prompt = True):
if get_level() <= LEVEL_DEBUG: if get_level() <= LEVEL_DEBUG:
log('/', data, timestamp=timestamp) log('/', data, timestamp=timestamp, prompt = prompt)
if not error is None:
debug('Error: ' + str(error) + parse_error())
# info: when there is something to notify the user of, such as the success of a process # info: when there is something to notify the user of, such as the success of a process
def info(data, timestamp=False): def info(data, timestamp = False, prompt = True):
if get_level() <= LEVEL_INFO: if get_level() <= LEVEL_INFO:
log('+', data, colors.fg.green, timestamp=timestamp) log('+', data, colors.fg.green, timestamp = timestamp, prompt = prompt)
# warn: when there is a potential for something bad to happen # warn: when there is a potential for something bad to happen
def warn(data, timestamp=True): def warn(data, error = None, timestamp = True, prompt = True):
if not error is None:
debug('Error: ' + str(error) + parse_error())
if get_level() <= LEVEL_WARN: if get_level() <= LEVEL_WARN:
log('!', data, colors.fg.orange, timestamp=timestamp) log('!', data, colors.fg.orange, timestamp = timestamp, prompt = prompt)
# error: when only one function, module, or process of the program encountered a problem and must stop # error: when only one function, module, or process of the program encountered a problem and must stop
def error(data, error=None, timestamp=True): def error(data, error = None, timestamp = True, prompt = True):
if get_level() <= LEVEL_ERROR: if get_level() <= LEVEL_ERROR:
log('-', data, colors.fg.red, timestamp=timestamp) log('-', data, colors.fg.red, timestamp = timestamp, fd = sys.stderr, prompt = prompt)
if not error is None: if not error is None:
debug('Error: ' + str(error) + parse_error()) debug('Error: ' + str(error) + parse_error())
# fatal: when something so bad has happened that the program must stop
def fatal(data, timestamp=True): def fatal(data, error = None, timestamp=True, prompt = True):
if not error is None:
debug('Error: ' + str(error) + parse_error())
if get_level() <= LEVEL_FATAL: if get_level() <= LEVEL_FATAL:
log('#', data, colors.bg.red + colors.fg.green + colors.bold, timestamp=timestamp) log('#', data, colors.bg.red + colors.fg.green + colors.bold, timestamp=timestamp, fd = sys.stderr, prompt = prompt)
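After these changes every level accepts an optional error argument (forwarded to debug with a parsed traceback), error and fatal write to stderr, and prompt=False drops the bracketed prefix. Illustrative calls against the updated API:

# Illustrative calls against the updated logger API shown above.
import logger

try:
    raise ValueError('bad block header')
except ValueError as e:
    logger.warn('Could not parse block', error=e)  # warning plus debug traceback
logger.info('5 blocks synced', prompt=False)       # no [+] prefix
logger.error('Disk full while saving block')       # written to stderr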
# returns a formatted error message # returns a formatted error message
def parse_error(): def parse_error():

View File

@ -18,7 +18,7 @@
along with this program. If not, see <https://www.gnu.org/licenses/>. along with this program. If not, see <https://www.gnu.org/licenses/>.
''' '''
import subprocess, os, random, sys, logger, time, signal import subprocess, os, random, sys, logger, time, signal, config
from onionrblockapi import Block from onionrblockapi import Block
class NetController: class NetController:
@ -33,6 +33,7 @@ class NetController:
self.hsPort = hsPort self.hsPort = hsPort
self._torInstnace = '' self._torInstnace = ''
self.myID = '' self.myID = ''
config.reload()
''' '''
if os.path.exists(self.torConfigLocation): if os.path.exists(self.torConfigLocation):
torrc = open(self.torConfigLocation, 'r') torrc = open(self.torConfigLocation, 'r')
@ -47,11 +48,15 @@ class NetController:
''' '''
Generate a torrc file for our tor instance Generate a torrc file for our tor instance
''' '''
hsVer = '# v2 onions'
if config.get('tor.v3onions'):
hsVer = 'HiddenServiceVersion 3'
logger.info('Using v3 onions :)')
if os.path.exists(self.torConfigLocation): if os.path.exists(self.torConfigLocation):
os.remove(self.torConfigLocation) os.remove(self.torConfigLocation)
torrcData = '''SocksPort ''' + str(self.socksPort) + ''' torrcData = '''SocksPort ''' + str(self.socksPort) + '''
HiddenServiceDir data/hs/ HiddenServiceDir data/hs/
\n''' + hsVer + '''\n
HiddenServicePort 80 127.0.0.1:''' + str(self.hsPort) + ''' HiddenServicePort 80 127.0.0.1:''' + str(self.hsPort) + '''
DataDirectory data/tordata/ DataDirectory data/tordata/
''' '''
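For illustration, with tor.v3onions enabled, SOCKS on 9050 and the hidden service forwarding to 8080 (both ports are placeholders), the generated torrc comes out roughly as:

SocksPort 9050
HiddenServiceDir data/hs/
HiddenServiceVersion 3
HiddenServicePort 80 127.0.0.1:8080
DataDirectory data/tordata/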
@ -97,10 +102,10 @@ DataDirectory data/tordata/
elif 'Opening Socks listener' in line.decode(): elif 'Opening Socks listener' in line.decode():
logger.debug(line.decode().replace('\n', '')) logger.debug(line.decode().replace('\n', ''))
else: else:
logger.fatal('Failed to start Tor. Try killing any other Tor processes owned by this user.') logger.fatal('Failed to start Tor. Maybe a stray instance of Tor used by Onionr is still running?')
return False return False
except KeyboardInterrupt: except KeyboardInterrupt:
logger.fatal("Got keyboard interrupt") logger.fatal("Got keyboard interrupt.")
return False return False
logger.debug('Finished starting Tor.', timestamp=True) logger.debug('Finished starting Tor.', timestamp=True)

View File

@ -25,7 +25,7 @@ import sys
if sys.version_info[0] == 2 or sys.version_info[1] < 5: if sys.version_info[0] == 2 or sys.version_info[1] < 5:
print('Error, Onionr requires Python 3.4+') print('Error, Onionr requires Python 3.4+')
sys.exit(1) sys.exit(1)
import os, base64, random, getpass, shutil, subprocess, requests, time, platform, datetime, re, json, getpass import os, base64, random, getpass, shutil, subprocess, requests, time, platform, datetime, re, json, getpass, sqlite3
from threading import Thread from threading import Thread
import api, core, config, logger, onionrplugins as plugins, onionrevents as events import api, core, config, logger, onionrplugins as plugins, onionrevents as events
import onionrutils import onionrutils
@ -40,9 +40,9 @@ except ImportError:
raise Exception("You need the PySocks module (for use with socks5 proxy to use Tor)") raise Exception("You need the PySocks module (for use with socks5 proxy to use Tor)")
ONIONR_TAGLINE = 'Anonymous P2P Platform - GPLv3 - https://Onionr.VoidNet.Tech' ONIONR_TAGLINE = 'Anonymous P2P Platform - GPLv3 - https://Onionr.VoidNet.Tech'
ONIONR_VERSION = '0.1.0' # for debugging and stuff ONIONR_VERSION = '0.2.0' # for debugging and stuff
ONIONR_VERSION_TUPLE = tuple(ONIONR_VERSION.split('.')) # (MAJOR, MINOR, VERSION) ONIONR_VERSION_TUPLE = tuple(ONIONR_VERSION.split('.')) # (MAJOR, MINOR, VERSION)
API_VERSION = '3' # increments of 1; only change when something fundamental about how the API works changes. This way other nodes knows how to communicate without learning too much information about you. API_VERSION = '4' # increments of 1; only change when something fundamental about how the API works changes. This way other nodes know how to communicate without learning too much information about you.
class Onionr: class Onionr:
def __init__(self): def __init__(self):
@ -50,7 +50,6 @@ class Onionr:
Main Onionr class. This is for the CLI program, and does not handle much of the logic. Main Onionr class. This is for the CLI program, and does not handle much of the logic.
In general, external programs and plugins should not use this class. In general, external programs and plugins should not use this class.
''' '''
try: try:
os.chdir(sys.path[0]) os.chdir(sys.path[0])
except FileNotFoundError: except FileNotFoundError:
@ -92,8 +91,6 @@ class Onionr:
self.onionrCore = core.Core() self.onionrCore = core.Core()
self.onionrUtils = OnionrUtils(self.onionrCore) self.onionrUtils = OnionrUtils(self.onionrCore)
self.userOS = platform.system()
# Handle commands # Handle commands
self.debug = False # Whole application debugging self.debug = False # Whole application debugging
@ -138,18 +135,18 @@ class Onionr:
self.onionrCore.createAddressDB() self.onionrCore.createAddressDB()
# Get configuration # Get configuration
if type(config.get('client.hmac')) is type(None):
config.set('client.hmac', base64.b16encode(os.urandom(32)).decode('utf-8'), savefile=True)
if type(config.get('client.port')) is type(None):
randomPort = 0
while randomPort < 1024:
randomPort = self.onionrCore._crypto.secrets.randbelow(65535)
config.set('client.port', randomPort, savefile=True)
if type(config.get('client.participate')) is type(None):
config.set('client.participate', True, savefile=True)
if type(config.get('client.api_version')) is type(None):
config.set('client.api_version', API_VERSION, savefile=True)
if not data_exists:
# Generate default config
# Hostname should only be set if different from 127.x.x.x. Important for DNS rebinding attack prevention.
if self.debug:
randomPort = 8080
else:
while True:
randomPort = random.randint(1024, 65535)
if self.onionrUtils.checkPort(randomPort):
break
config.set('client', {'participate': True, 'hmac': base64.b16encode(os.urandom(32)).decode('utf-8'), 'port': randomPort, 'api_version': API_VERSION}, True)
self.cmds = { self.cmds = {
'': self.showHelpSuggestion, '': self.showHelpSuggestion,
@ -181,21 +178,15 @@ class Onionr:
'listkeys': self.listKeys, 'listkeys': self.listKeys,
'list-keys': self.listKeys, 'list-keys': self.listKeys,
'addmsg': self.addMessage,
'addmessage': self.addMessage,
'add-msg': self.addMessage,
'add-message': self.addMessage,
'pm': self.sendEncrypt,
'getpms': self.getPMs,
'get-pms': self.getPMs,
'addpeer': self.addPeer, 'addpeer': self.addPeer,
'add-peer': self.addPeer, 'add-peer': self.addPeer,
'add-address': self.addAddress, 'add-address': self.addAddress,
'add-addr': self.addAddress, 'add-addr': self.addAddress,
'addaddr': self.addAddress, 'addaddr': self.addAddress,
'addaddress': self.addAddress, 'addaddress': self.addAddress,
'list-peers': self.listPeers,
'blacklist-block': self.banBlock,
'add-file': self.addFile, 'add-file': self.addFile,
'addfile': self.addFile, 'addfile': self.addFile,
@ -206,8 +197,20 @@ class Onionr:
'introduce': self.onionrCore.introduceNode, 'introduce': self.onionrCore.introduceNode,
'connect': self.addAddress, 'connect': self.addAddress,
'kex': self.doKEX,
'pex': self.doPEX,
'getpassword': self.getWebPassword 'ui' : self.openUI,
'gui' : self.openUI,
'getpassword': self.printWebPassword,
'get-password': self.printWebPassword,
'getpwd': self.printWebPassword,
'get-pwd': self.printWebPassword,
'getpass': self.printWebPassword,
'get-pass': self.printWebPassword,
'getpasswd': self.printWebPassword,
'get-passwd': self.printWebPassword
} }
self.cmdhelp = { self.cmdhelp = {
@ -217,19 +220,19 @@ class Onionr:
'start': 'Starts the Onionr daemon', 'start': 'Starts the Onionr daemon',
'stop': 'Stops the Onionr daemon', 'stop': 'Stops the Onionr daemon',
'stats': 'Displays node statistics', 'stats': 'Displays node statistics',
'getpassword': 'Displays the web password', 'get-password': 'Displays the web password',
'enable-plugin': 'Enables and starts a plugin', 'enable-plugin': 'Enables and starts a plugin',
'disable-plugin': 'Disables and stops a plugin', 'disable-plugin': 'Disables and stops a plugin',
'reload-plugin': 'Reloads a plugin', 'reload-plugin': 'Reloads a plugin',
'create-plugin': 'Creates directory structure for a plugin', 'create-plugin': 'Creates directory structure for a plugin',
'add-peer': 'Adds a peer to database', 'add-peer': 'Adds a peer to database',
'list-peers': 'Displays a list of peers', 'list-peers': 'Displays a list of peers',
'add-msg': 'Broadcasts a message to the Onionr network',
'pm': 'Adds a private message to block',
'get-pms': 'Shows private messages sent to you',
'add-file': 'Create an Onionr block from a file', 'add-file': 'Create an Onionr block from a file',
'import-blocks': 'import blocks from the disk (Onionr is transport-agnostic!)', 'import-blocks': 'import blocks from the disk (Onionr is transport-agnostic!)',
'listconn': 'list connected peers', 'listconn': 'list connected peers',
'kex': 'exchange keys with peers (done automatically)',
'pex': 'exchange addresses with peers (done automatically)',
'blacklist-block': 'deletes a block by hash and permanently removes it from your node',
'introduce': 'Introduce your node to the public Onionr network', 'introduce': 'Introduce your node to the public Onionr network',
} }
@ -258,12 +261,40 @@ class Onionr:
def getCommands(self): def getCommands(self):
return self.cmds return self.cmds
def banBlock(self):
try:
ban = sys.argv[2]
except IndexError:
ban = logger.readline('Enter a block hash:')
if self.onionrUtils.validateHash(ban):
if not self.onionrCore._blacklist.inBlacklist(ban):
try:
self.onionrCore._blacklist.addToDB(ban)
self.onionrCore.removeBlock(ban)
except Exception as error:
logger.error('Could not blacklist block', error=error)
else:
logger.info('Block blacklisted')
else:
logger.warn('That block is already blacklisted')
else:
logger.error('Invalid block hash')
return
def listConn(self): def listConn(self):
self.onionrCore.daemonQueueAdd('connectedPeers') self.onionrCore.daemonQueueAdd('connectedPeers')
def listPeers(self):
logger.info('Peer transport address list:')
for i in self.onionrCore.listAdders():
logger.info(i)
def getWebPassword(self): def getWebPassword(self):
return config.get('client.hmac') return config.get('client.hmac')
def printWebPassword(self):
print(self.getWebPassword())
def getHelp(self): def getHelp(self):
return self.cmdhelp return self.cmdhelp
@ -328,31 +359,15 @@ class Onionr:
return return
def sendEncrypt(self): def doKEX(self):
''' '''make communicator do kex'''
Create a private message and send it logger.info('Sending kex to command queue...')
''' self.onionrCore.daemonQueueAdd('kex')
invalidID = True
while invalidID:
try:
peer = logger.readline('Peer to send to: ')
except KeyboardInterrupt:
break
else:
if self.onionrUtils.validatePubKey(peer):
invalidID = False
else:
logger.error('Invalid peer ID')
else:
try:
message = logger.readline("Enter a message: ")
except KeyboardInterrupt:
pass
else:
logger.info("Sending message to: " + logger.colors.underline + peer)
self.onionrUtils.sendPM(peer, message)
def doPEX(self):
'''make communicator do pex'''
logger.info('Sending pex to command queue...')
self.onionrCore.daemonQueueAdd('pex')
def listKeys(self): def listKeys(self):
''' '''
@ -377,7 +392,7 @@ class Onionr:
return return
if not '-' in newPeer: if not '-' in newPeer:
logger.info('Since no POW token was supplied for that key, one is being generated') logger.info('Since no POW token was supplied for that key, one is being generated')
proof = onionrproofs.POW(newPeer) proof = onionrproofs.DataPOW(newPeer)
while True: while True:
result = proof.getResult() result = proof.getResult()
if result == False: if result == False:
@ -428,19 +443,12 @@ class Onionr:
#addedHash = Block(type = 'txt', content = messageToAdd).save() #addedHash = Block(type = 'txt', content = messageToAdd).save()
addedHash = self.onionrCore.insertBlock(messageToAdd) addedHash = self.onionrCore.insertBlock(messageToAdd)
if addedHash != None: if addedHash != None and addedHash != False and addedHash != "":
logger.info("Message inserted as block %s" % addedHash) logger.info("Message inserted as block %s" % addedHash)
else: else:
logger.error('Failed to insert block.', timestamp = False) logger.error('Failed to insert block.', timestamp = False)
return return
def getPMs(self):
'''
display PMs sent to us
'''
self.onionrUtils.loadPMs()
def enablePlugin(self): def enablePlugin(self):
''' '''
Enables and starts the given plugin Enables and starts the given plugin
@ -557,8 +565,18 @@ class Onionr:
''' '''
Starts the Onionr communication daemon Starts the Onionr communication daemon
''' '''
communicatorDaemon = './communicator.py' communicatorDaemon = './communicator2.py'
if not os.environ.get("WERKZEUG_RUN_MAIN") == "true":
apiThread = Thread(target=api.API, args=(self.debug,))
apiThread.start()
try:
time.sleep(3)
except KeyboardInterrupt:
logger.info('Got keyboard interrupt')
time.sleep(1)
self.onionrUtils.localCommand('shutdown')
else:
if apiThread.isAlive():
if self._developmentMode: if self._developmentMode:
logger.warn('DEVELOPMENT MODE ENABLED (THIS IS LESS SECURE!)', timestamp = False) logger.warn('DEVELOPMENT MODE ENABLED (THIS IS LESS SECURE!)', timestamp = False)
net = NetController(config.get('client.port', 59496)) net = NetController(config.get('client.port', 59496))
@ -568,18 +586,16 @@ class Onionr:
logger.info('Started .onion service: ' + logger.colors.underline + net.myID) logger.info('Started .onion service: ' + logger.colors.underline + net.myID)
logger.info('Our Public key: ' + self.onionrCore._crypto.pubKey) logger.info('Our Public key: ' + self.onionrCore._crypto.pubKey)
time.sleep(1) time.sleep(1)
try:
if config.get('general.newCommunicator', False):
communicatorDaemon = './communicator2.py'
logger.info('Using new communicator')
except NameError:
pass
#TODO make runable on windows #TODO make runable on windows
subprocess.Popen([communicatorDaemon, "run", str(net.socksPort)]) subprocess.Popen([communicatorDaemon, "run", str(net.socksPort)])
logger.debug('Started communicator') logger.debug('Started communicator')
events.event('daemon_start', onionr = self) events.event('daemon_start', onionr = self)
api.API(self.debug) try:
while True:
time.sleep(5)
except KeyboardInterrupt:
self.onionrCore.daemonQueueAdd('shutdown')
self.onionrUtils.localCommand('shutdown')
return return
def killDaemon(self): def killDaemon(self):
@ -592,10 +608,10 @@ class Onionr:
events.event('daemon_stop', onionr = self) events.event('daemon_stop', onionr = self)
net = NetController(config.get('client.port', 59496)) net = NetController(config.get('client.port', 59496))
try: try:
self.onionrUtils.localCommand('shutdown')
except requests.exceptions.ConnectionError:
pass
self.onionrCore.daemonQueueAdd('shutdown') self.onionrCore.daemonQueueAdd('shutdown')
except sqlite3.OperationalError:
pass
net.killTor() net.killTor()
except Exception as e: except Exception as e:
logger.error('Failed to shutdown daemon.', error = e, timestamp = False) logger.error('Failed to shutdown daemon.', error = e, timestamp = False)
@ -618,6 +634,7 @@ class Onionr:
'Public Key' : self.onionrCore._crypto.pubKey, 'Public Key' : self.onionrCore._crypto.pubKey,
'POW Token' : powToken, 'POW Token' : powToken,
'Combined' : self.onionrCore._crypto.pubKey + '-' + powToken, 'Combined' : self.onionrCore._crypto.pubKey + '-' + powToken,
'Human readable public key' : self.onionrCore._utils.getHumanReadableID(),
'Node Address' : self.get_hostname(), 'Node Address' : self.get_hostname(),
# file and folder size stats # file and folder size stats
@ -735,5 +752,12 @@ class Onionr:
else: else:
logger.error('%s add-file <filename>' % sys.argv[0], timestamp = False) logger.error('%s add-file <filename>' % sys.argv[0], timestamp = False)
def openUI(self):
import webbrowser
url = 'http://127.0.0.1:%s/ui/index.html?timingToken=%s' % (config.get('client.port', 59496), self.onionrUtils.getTimeBypassToken())
Onionr() print('Opening %s ...' % url)
webbrowser.open(url, new = 1, autoraise = True)
if __name__ == "__main__":
Onionr()
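The reworked daemon() above now starts the HTTP API in a thread, launches the communicator as a subprocess, and idles until interrupted. A rough sketch of that shape, where api_factory is a stand-in for api.API and shutdown handling is reduced to terminating the child process:

import subprocess, time
from threading import Thread

def run_daemon(api_factory, socks_port):
    # Serve the client/public HTTP API off the main thread
    api_thread = Thread(target=api_factory, daemon=True)
    api_thread.start()
    # Launch the communicator daemon as its own process
    comm = subprocess.Popen(['./communicator2.py', 'run', str(socks_port)])
    try:
        while True:
            time.sleep(5)  # idle; the real code also services the daemon queue
    except KeyboardInterrupt:
        comm.terminate()   # the real code queues a 'shutdown' command instead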

115
onionr/onionrblacklist.py Normal file
View File

@ -0,0 +1,115 @@
'''
Onionr - P2P Microblogging Platform & Social network.
This file handles maintenance of a blacklist database, for blocks and peers
'''
'''
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
import sqlite3, os, logger
class OnionrBlackList:
def __init__(self, coreInst):
self.blacklistDB = 'data/blacklist.db'
self._core = coreInst
if not os.path.exists(self.blacklistDB):
self.generateDB()
return
def inBlacklist(self, data):
hashed = self._core._utils.bytesToStr(self._core._crypto.sha3Hash(data))
retData = False
if not hashed.isalnum():
raise Exception("Hashed data is not alpha numeric")
for i in self._dbExecute("select * from blacklist where hash='%s'" % (hashed,)):
retData = True # this only executes if an entry is present by that hash
break
return retData
def _dbExecute(self, toExec):
conn = sqlite3.connect(self.blacklistDB)
c = conn.cursor()
retData = c.execute(toExec)
conn.commit()
return retData
def deleteBeforeDate(self, date):
# TODO, delete blacklist entries before date
return
def deleteExpired(self, dataType=0):
'''Delete expired entries'''
deleteList = []
curTime = self._core._utils.getEpoch()
try:
int(dataType)
except (TypeError, ValueError):
raise TypeError("dataType must be int")
for i in self._dbExecute('select * from blacklist where dataType=%s' % (dataType,)):
if i[1] == dataType:
if (curTime - i[2]) >= i[3]:
deleteList.append(i[0])
for thing in deleteList:
self._dbExecute("delete from blacklist where hash='%s'" % (thing,))
def generateDB(self):
self._dbExecute('''CREATE TABLE blacklist(
hash text primary key not null,
dataType int,
blacklistDate int,
expire int
);
''')
return
def clearDB(self):
self._dbExecute('''delete from blacklist;''')
def getList(self):
data = self._dbExecute('select * from blacklist')
myList = []
for i in data:
myList.append(i[0])
return myList
def addToDB(self, data, dataType=0, expire=0):
'''Add to the blacklist. Intended to be block hash, block data, peers, or transport addresses
0=block
1=peer
2=pubkey
'''
# we hash the data so we can remove data entirely from our node's disk
hashed = self._core._utils.bytesToStr(self._core._crypto.sha3Hash(data))
if self.inBlacklist(hashed):
return
if not hashed.isalnum():
raise Exception("Hashed data is not alpha numeric")
try:
int(dataType)
except ValueError:
raise Exception("dataType is not int")
try:
int(expire)
except ValueError:
raise Exception("expire is not int")
#TODO check for length sanity
insert = (hashed,)
blacklistDate = self._core._utils.getEpoch()
self._dbExecute("insert into blacklist (hash, dataType, blacklistDate, expire) VALUES('%s', %s, %s, %s);" % (hashed, dataType, blacklistDate, expire))

View File

@ -18,7 +18,7 @@
along with this program. If not, see <https://www.gnu.org/licenses/>. along with this program. If not, see <https://www.gnu.org/licenses/>.
''' '''
import core as onionrcore, logger, config import core as onionrcore, logger, config, onionrexceptions, nacl.exceptions
import json, os, sys, datetime, base64 import json, os, sys, datetime, base64
class Block: class Block:
@ -28,21 +28,17 @@ class Block:
def __init__(self, hash = None, core = None, type = None, content = None): def __init__(self, hash = None, core = None, type = None, content = None):
# take from arguments # take from arguments
# sometimes people input a bytes object instead of str in `hash` # sometimes people input a bytes object instead of str in `hash`
try: if (not hash is None) and isinstance(hash, bytes):
hash = hash.decode() hash = hash.decode()
except AttributeError:
pass
self.hash = hash self.hash = hash
self.core = core self.core = core
self.btype = type self.btype = type
self.bcontent = content self.bcontent = content
# initialize variables # initialize variables
self.valid = True self.valid = True
self.raw = None self.raw = None
self.powHash = None
self.powToken = None
self.signed = False self.signed = False
self.signature = None self.signature = None
self.signedData = None self.signedData = None
@ -50,6 +46,10 @@ class Block:
self.parent = None self.parent = None
self.bheader = {} self.bheader = {}
self.bmetadata = {} self.bmetadata = {}
self.isEncrypted = False
self.decrypted = False
self.signer = None
self.validSig = False
# handle arguments # handle arguments
if self.getCore() is None: if self.getCore() is None:
@ -57,13 +57,62 @@ class Block:
# update the blocks' contents if it exists # update the blocks' contents if it exists
if not self.getHash() is None: if not self.getHash() is None:
if not self.update(): if not self.core._utils.validateHash(self.hash):
logger.debug('Block hash %s is invalid.' % self.getHash())
raise onionrexceptions.InvalidHexHash('Block hash is invalid.')
elif not self.update():
logger.debug('Failed to open block %s.' % self.getHash()) logger.debug('Failed to open block %s.' % self.getHash())
else: else:
logger.debug('Did not update block') pass
#logger.debug('Did not update block.')
# logic # logic
def decrypt(self, anonymous = True, encodedData = True):
'''
Decrypt a block, loading decrypted data into their vars
'''
if self.decrypted:
return True
retData = False
core = self.getCore()
# decrypt data
if self.getHeader('encryptType') == 'asym':
try:
self.bcontent = core._crypto.pubKeyDecrypt(self.bcontent, anonymous=anonymous, encodedData=encodedData)
bmeta = core._crypto.pubKeyDecrypt(self.bmetadata, anonymous=anonymous, encodedData=encodedData)
try:
bmeta = bmeta.decode()
except AttributeError:
# yet another bytes fix
pass
self.bmetadata = json.loads(bmeta)
self.signature = core._crypto.pubKeyDecrypt(self.signature, anonymous=anonymous, encodedData=encodedData)
self.signer = core._crypto.pubKeyDecrypt(self.signer, anonymous=anonymous, encodedData=encodedData)
self.signedData = json.dumps(self.bmetadata) + self.bcontent.decode()
except nacl.exceptions.CryptoError:
pass
#logger.debug('Could not decrypt block. Either invalid key or corrupted data')
else:
retData = True
self.decrypted = True
else:
logger.warn('symmetric decryption is not yet supported by this API')
return retData
def verifySig(self):
'''
Verify if a block's signature is signed by its claimed signer
'''
core = self.getCore()
if core._crypto.edVerify(data=self.signedData, key=self.signer, sig=self.signature, encodedData=True):
self.validSig = True
else:
self.validSig = False
return self.validSig
def update(self, data = None, file = None): def update(self, data = None, file = None):
''' '''
Loads data from a block in to the current object. Loads data from a block in to the current object.
@ -114,14 +163,19 @@ class Block:
self.raw = str(blockdata) self.raw = str(blockdata)
self.bheader = json.loads(self.getRaw()[:self.getRaw().index('\n')]) self.bheader = json.loads(self.getRaw()[:self.getRaw().index('\n')])
self.bcontent = self.getRaw()[self.getRaw().index('\n') + 1:] self.bcontent = self.getRaw()[self.getRaw().index('\n') + 1:]
if self.bheader['encryptType'] in ('asym', 'sym'):
self.bmetadata = self.getHeader('meta', None)
self.isEncrypted = True
else:
self.bmetadata = json.loads(self.getHeader('meta', None)) self.bmetadata = json.loads(self.getHeader('meta', None))
self.parent = self.getMetadata('parent', None) self.parent = self.getMetadata('parent', None)
self.btype = self.getMetadata('type', None) self.btype = self.getMetadata('type', None)
self.powHash = self.getMetadata('powHash', None)
self.powToken = self.getMetadata('powToken', None)
self.signed = ('sig' in self.getHeader() and self.getHeader('sig') != '') self.signed = ('sig' in self.getHeader() and self.getHeader('sig') != '')
# TODO: detect if signer is hash of pubkey or not
self.signer = self.getHeader('signer', None)
self.signature = self.getHeader('sig', None) self.signature = self.getHeader('sig', None)
self.signedData = (None if not self.isSigned() else self.getHeader('meta') + '\n' + self.getContent()) # signed data is jsonMeta + block content (no linebreak)
self.signedData = (None if not self.isSigned() else self.getHeader('meta') + self.getContent())
self.date = self.getCore().getBlockDate(self.getHash()) self.date = self.getCore().getBlockDate(self.getHash())
if not self.getDate() is None: if not self.getDate() is None:
@ -174,11 +228,13 @@ class Block:
else: else:
self.hash = self.getCore().insertBlock(self.getContent(), header = self.getType(), sign = sign) self.hash = self.getCore().insertBlock(self.getContent(), header = self.getType(), sign = sign)
self.update() self.update()
return self.getHash() return self.getHash()
else: else:
logger.warn('Not writing block; it is invalid.') logger.warn('Not writing block; it is invalid.')
except Exception as e: except Exception as e:
logger.error('Failed to save block.', error = e, timestamp = False) logger.error('Failed to save block.', error = e, timestamp = False)
return False return False
# getters # getters
@ -210,7 +266,6 @@ class Block:
Outputs: Outputs:
- (str): the type of the block - (str): the type of the block
''' '''
return self.btype return self.btype
def getRaw(self): def getRaw(self):
@ -435,7 +490,7 @@ class Block:
# static functions # static functions
def getBlocks(type = None, signer = None, signed = None, reverse = False, core = None): def getBlocks(type = None, signer = None, signed = None, parent = None, reverse = False, limit = None, core = None):
''' '''
Returns a list of Block objects based on supplied filters Returns a list of Block objects based on supplied filters
@ -453,6 +508,9 @@ class Block:
try: try:
core = (core if not core is None else onionrcore.Core()) core = (core if not core is None else onionrcore.Core())
if (not parent is None) and (not isinstance(parent, Block)):
parent = Block(hash = parent, core = core)
relevant_blocks = list() relevant_blocks = list()
blocks = (core.getBlockList() if type is None else core.getBlocksByType(type)) blocks = (core.getBlockList() if type is None else core.getBlocksByType(type))
@ -467,6 +525,8 @@ class Block:
if not signer is None: if not signer is None:
if isinstance(signer, (str,)): if isinstance(signer, (str,)):
signer = [signer] signer = [signer]
if isinstance(signer, (bytes,)):
signer = [signer.decode()]
isSigner = False isSigner = False
for key in signer: for key in signer:
@ -477,14 +537,23 @@ class Block:
if not isSigner: if not isSigner:
relevant = False relevant = False
if relevant: if not parent is None:
blockParent = block.getParent()
if blockParent is None:
relevant = False
else:
relevant = parent.getHash() == blockParent.getHash()
if relevant and (limit is None or len(relevant_blocks) <= int(limit)):
relevant_blocks.append(block) relevant_blocks.append(block)
if bool(reverse): if bool(reverse):
relevant_blocks.reverse() relevant_blocks.reverse()
return relevant_blocks return relevant_blocks
except Exception as e: except Exception as e:
logger.debug(('Failed to get blocks: %s' % str(e)) + logger.parse_error()) logger.debug('Failed to get blocks.', error = e)
return list() return list()
@ -496,7 +565,6 @@ class Block:
- child (str/Block): the child Block to be followed - child (str/Block): the child Block to be followed
- file (str/file): the file to write the content to, instead of returning it - file (str/file): the file to write the content to, instead of returning it
- maximumFollows (int): the maximum number of Blocks to follow - maximumFollows (int): the maximum number of Blocks to follow
''' '''
# validate data and instantiate Core # validate data and instantiate Core
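Block.decrypt() above relies on PyNaCl sealed boxes (anonymous sender) built from Ed25519 keys converted to Curve25519, and verifySig() checks an Ed25519 signature over the signed data. A rough self-contained sketch of that path, with simplified key handling and encodings; it is not Onionr's exact flow, where the signature and signer are themselves carried inside the encrypted block:

import nacl.signing, nacl.public, nacl.encoding

signing_key = nacl.signing.SigningKey.generate()      # Ed25519 signing key
verify_key = signing_key.verify_key

# Convert the Ed25519 pair to Curve25519 for sealed-box encryption
curve_priv = signing_key.to_curve25519_private_key()
curve_pub = verify_key.to_curve25519_public_key()

payload = b'block content'
sealed = nacl.public.SealedBox(curve_pub).encrypt(
    payload, encoder=nacl.encoding.Base64Encoder)      # anonymous sender

# Receiver side: decrypt, then verify the detached signature
plain = nacl.public.SealedBox(curve_priv).decrypt(
    sealed, encoder=nacl.encoding.Base64Encoder)
signature = signing_key.sign(plain).signature
verify_key.verify(plain, signature)                    # raises BadSignatureError on tampering
assert plain == payload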

View File

@ -59,7 +59,7 @@ class OnionrCrypto:
with open(self._keyFile, 'w') as keyfile: with open(self._keyFile, 'w') as keyfile:
keyfile.write(self.pubKey + ',' + self.privKey) keyfile.write(self.pubKey + ',' + self.privKey)
with open(self.keyPowFile, 'w') as keyPowFile: with open(self.keyPowFile, 'w') as keyPowFile:
proof = onionrproofs.POW(self.pubKey) proof = onionrproofs.DataPOW(self.pubKey)
logger.info('Doing necessary work to insert our public key') logger.info('Doing necessary work to insert our public key')
while True: while True:
time.sleep(0.2) time.sleep(0.2)
@ -114,6 +114,11 @@ class OnionrCrypto:
'''Encrypt to a public key (Curve25519, taken from base32 Ed25519 pubkey)''' '''Encrypt to a public key (Curve25519, taken from base32 Ed25519 pubkey)'''
retVal = '' retVal = ''
try:
pubkey = pubkey.encode()
except AttributeError:
pass
if encodedData: if encodedData:
encoding = nacl.encoding.Base64Encoder encoding = nacl.encoding.Base64Encoder
else: else:
@ -127,7 +132,11 @@ class OnionrCrypto:
elif anonymous: elif anonymous:
key = nacl.signing.VerifyKey(key=pubkey, encoder=nacl.encoding.Base32Encoder).to_curve25519_public_key() key = nacl.signing.VerifyKey(key=pubkey, encoder=nacl.encoding.Base32Encoder).to_curve25519_public_key()
anonBox = nacl.public.SealedBox(key) anonBox = nacl.public.SealedBox(key)
retVal = anonBox.encrypt(data.encode(), encoder=encoding) try:
data = data.encode()
except AttributeError:
pass
retVal = anonBox.encrypt(data, encoder=encoding)
return retVal return retVal
def pubKeyDecrypt(self, data, pubkey='', anonymous=False, encodedData=False): def pubKeyDecrypt(self, data, pubkey='', anonymous=False, encodedData=False):
@ -238,6 +247,10 @@ class OnionrCrypto:
return result return result
def sha3Hash(self, data): def sha3Hash(self, data):
try:
data = data.encode()
except AttributeError:
pass
hasher = hashlib.sha3_256() hasher = hashlib.sha3_256()
hasher.update(data) hasher.update(data)
return hasher.hexdigest() return hasher.hexdigest()
@ -249,22 +262,22 @@ class OnionrCrypto:
pass pass
return nacl.hash.blake2b(data) return nacl.hash.blake2b(data)
def verifyPow(self, blockContent, metadata): def verifyPow(self, blockContent):
''' '''
Verifies the proof of work associated with a block Verifies the proof of work associated with a block
''' '''
retData = False retData = False
if not 'powRandomToken' in metadata:
logger.warn('No powRandomToken')
return False
dataLen = len(blockContent) dataLen = len(blockContent)
expectedHash = self.blake2bHash(base64.b64decode(metadata['powRandomToken']) + self.blake2bHash(blockContent.encode()))
difficulty = 0
try: try:
expectedHash = expectedHash.decode() blockContent = blockContent.encode()
except AttributeError:
pass
blockHash = self.sha3Hash(blockContent)
try:
blockHash = blockHash.decode() # bytes on some versions for some reason
except AttributeError: except AttributeError:
pass pass
@ -273,7 +286,7 @@ class OnionrCrypto:
mainHash = '0000000000000000000000000000000000000000000000000000000000000000'#nacl.hash.blake2b(nacl.utils.random()).decode() mainHash = '0000000000000000000000000000000000000000000000000000000000000000'#nacl.hash.blake2b(nacl.utils.random()).decode()
puzzle = mainHash[:difficulty] puzzle = mainHash[:difficulty]
if metadata['powRandomToken'][:difficulty] == puzzle: if blockHash[:difficulty] == puzzle:
# logger.debug('Validated block pow') # logger.debug('Validated block pow')
retData = True retData = True
else: else:
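The reworked verifyPow() above drops the powRandomToken header check and instead requires the sha3 hash of the block content itself to start with enough zero digits. A standalone sketch of that check, with the difficulty passed in directly rather than derived from the data length as the surrounding code does:

import hashlib

def verify_pow(block_content: bytes, difficulty: int = 4) -> bool:
    # A valid proof means the block's sha3-256 hex digest begins with
    # `difficulty` zero characters
    block_hash = hashlib.sha3_256(block_content).hexdigest()
    return block_hash.startswith('0' * difficulty)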

View File

@ -0,0 +1,56 @@
'''
Onionr - P2P Microblogging Platform & Social network.
Contains the DaemonTools class, which contains useful functions for the communicator daemon
'''
'''
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
import onionrexceptions, onionrpeers, onionrproofs, base64, logger
class DaemonTools:
def __init__(self, daemon):
self.daemon = daemon
self.announceCache = {}
def announceNode(self):
'''Announce our node to our peers'''
retData = False
# Announce to random online peers
for i in self.daemon.onlinePeers:
if not i in self.announceCache:
peer = i
break
else:
peer = self.daemon.pickOnlinePeer()
ourID = self.daemon._core.hsAddress.strip()
url = 'http://' + peer + '/public/announce/'
data = {'node': ourID}
combinedNodes = ourID + peer
if peer in self.announceCache:
data['random'] = self.announceCache[peer]
else:
proof = onionrproofs.DataPOW(combinedNodes, forceDifficulty=4)
data['random'] = base64.b64encode(proof.waitForResult()[1])
self.announceCache[peer] = data['random']
logger.info('Announcing node to ' + url)
if self.daemon._core._utils.doPostRequest(url, data) == 'Success':
retData = True
self.daemon.decrementThreadCount('announceNode')
return retData
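announceNode() above posts our hidden service address plus a proof-of-work token to a peer's /public/announce/ endpoint through the local Tor SOCKS proxy. A hypothetical client-side sketch using the same requests + PySocks stack; the peer address, port, and token are placeholders:

import base64, requests

def announce(our_id, peer_address, pow_token_bytes, socks_port=9050):
    proxies = {'http': 'socks4a://127.0.0.1:%s' % socks_port,
               'https': 'socks4a://127.0.0.1:%s' % socks_port}
    data = {'node': our_id,
            'random': base64.b64encode(pow_token_bytes)}
    r = requests.post('http://%s/public/announce/' % peer_address, data=data,
                      headers={'user-agent': 'PyOnionr'},
                      proxies=proxies, allow_redirects=False, timeout=(15, 30))
    return r.text == 'Success'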

View File

@ -33,10 +33,10 @@ def __event_caller(event_name, data = {}, onionr = None):
try: try:
call(plugins.get_plugin(plugin), event_name, data, get_pluginapi(onionr, data)) call(plugins.get_plugin(plugin), event_name, data, get_pluginapi(onionr, data))
except ModuleNotFoundError as e: except ModuleNotFoundError as e:
logger.warn('Disabling nonexistent plugin \"' + plugin + '\"...') logger.warn('Disabling nonexistent plugin "%s"...' % plugin)
plugins.disable(plugin, onionr, stop_event = False) plugins.disable(plugin, onionr, stop_event = False)
except Exception as e: except Exception as e:
logger.warn('Event \"' + event_name + '\" failed for plugin \"' + plugin + '\".') logger.warn('Event "%s" failed for plugin "%s".' % (event_name, plugin))
logger.debug(str(e)) logger.debug(str(e))

View File

@ -26,6 +26,10 @@ class Unknown(Exception):
class Invalid(Exception): class Invalid(Exception):
pass pass
# communicator exceptions
class OnlinePeerNeeded(Exception):
pass
# crypto exceptions # crypto exceptions
class InvalidPubkey(Exception): class InvalidPubkey(Exception):
pass pass
@ -34,8 +38,23 @@ class InvalidPubkey(Exception):
class InvalidMetadata(Exception): class InvalidMetadata(Exception):
pass pass
class BlacklistedBlock(Exception):
pass
class DataExists(Exception):
pass
class InvalidHexHash(Exception):
'''When a string is not a valid hex string of appropriate length for a hash value'''
pass
class InvalidProof(Exception):
'''When a proof is invalid or inadequate'''
pass
# network level exceptions # network level exceptions
class MissingPort(Exception): class MissingPort(Exception):
pass pass
class InvalidAddress(Exception): class InvalidAddress(Exception):
pass pass
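The new exception types above are raised by the validation paths elsewhere in this diff (for example, Block.__init__ raises InvalidHexHash for malformed hashes and validateMetadata uses DataExists for replayed data). An illustrative caller, with a stand-in regex check rather than Onionr's validateHash():

import re
import onionrexceptions

def require_valid_hash(block_hash):
    # Stand-in check: 64 hex characters, as produced by sha3-256
    if not re.fullmatch(r'[0-9a-fA-F]{64}', block_hash):
        raise onionrexceptions.InvalidHexHash('Block hash is invalid.')

try:
    require_valid_hash('not-a-hash')
except onionrexceptions.InvalidHexHash as e:
    print('rejected:', e)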

View File

@ -1,7 +1,7 @@
''' '''
Onionr - P2P Microblogging Platform & Social network. Onionr - P2P Microblogging Platform & Social network.
This file contains both the OnionrCommunicate class for communicating with peers This file contains the PeerProfiles class for network profiling of Onionr nodes
''' '''
''' '''
This program is free software: you can redistribute it and/or modify This program is free software: you can redistribute it and/or modify
@ -17,3 +17,83 @@
You should have received a copy of the GNU General Public License You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>. along with this program. If not, see <https://www.gnu.org/licenses/>.
''' '''
import core, config, logger, sqlite3
class PeerProfiles:
'''
PeerProfiles
'''
def __init__(self, address, coreInst):
self.address = address # node address
self.score = None
self.friendSigCount = 0
self.success = 0
self.failure = 0
if not isinstance(coreInst, core.Core):
raise TypeError("coreInst must be a type of core.Core")
self.coreInst = coreInst
assert isinstance(self.coreInst, core.Core)
self.loadScore()
return
def loadScore(self):
'''Load the node's score from the database'''
try:
self.success = int(self.coreInst.getAddressInfo(self.address, 'success'))
except (TypeError, ValueError) as e:
self.success = 0
self.score = self.success
def saveScore(self):
'''Save the node's score to the database'''
self.coreInst.setAddressInfo(self.address, 'success', self.score)
return
def addScore(self, toAdd):
'''Add to the peer's score (can add negative)'''
self.score += toAdd
self.saveScore()
def getScoreSortedPeerList(coreInst):
if not isinstance(coreInst, core.Core):
raise TypeError('coreInst must be instance of core.Core')
peerList = coreInst.listAdders()
peerScores = {}
for address in peerList:
# Load peer's profiles into a list
profile = PeerProfiles(address, coreInst)
peerScores[address] = profile.score
# Sort peers by their score, greatest to least
peerList = sorted(peerScores, key=peerScores.get, reverse=True)
return peerList
def peerCleanup(coreInst):
'''Removes peers who have been offline too long or score too low'''
if not isinstance(coreInst, core.Core):
raise TypeError('coreInst must be instance of core.Core')
logger.info('Cleaning peers...')
config.reload()
minScore = int(config.get('peers.minimumScore'))
maxPeers = int(config.get('peers.maxStoredPeers'))
adders = getScoreSortedPeerList(coreInst)
adders.reverse()
for address in adders:
# Remove peers that go below the negative score
if PeerProfiles(address, coreInst).score < minScore:
coreInst.removeAddress(address)
try:
coreInst._blacklist.addToDB(address, dataType=1, expire=300)
except sqlite3.IntegrityError: #TODO just make sure its not a unique constraint issue
pass
logger.warn('Removed address ' + address + '.')
# Unban probably not malicious peers TODO improve
coreInst._blacklist.deleteExpired(dataType=1)
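getScoreSortedPeerList() and peerCleanup() above order peers by their stored success score and drop the lowest scorers past the configured limits. A standalone sketch of that ordering with an in-memory score table instead of the address database; the thresholds are illustrative:

def score_sorted_peers(scores):
    # Highest score first, matching getScoreSortedPeerList()
    return sorted(scores, key=scores.get, reverse=True)

def cleanup(scores, min_score=-100, max_peers=100):
    keep = set(score_sorted_peers(scores)[:max_peers])
    return {addr: s for addr, s in scores.items()
            if s >= min_score and addr in keep}

peers = {'a.onion': 5, 'b.onion': -200, 'c.onion': 1}
print(score_sorted_peers(peers))   # ['a.onion', 'c.onion', 'b.onion']
print(cleanup(peers))              # b.onion is dropped for scoring too low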

View File

@ -130,6 +130,22 @@ class CommandAPI:
def get_commands(self): def get_commands(self):
return self.pluginapi.get_onionr().getCommands() return self.pluginapi.get_onionr().getCommands()
class WebAPI:
def __init__(self, pluginapi):
self.pluginapi = pluginapi
def register_callback(self, action, callback, scope = 'public'):
return self.pluginapi.get_onionr().api.setCallback(action, callback, scope = scope)
def unregister_callback(self, action, scope = 'public'):
return self.pluginapi.get_onionr().api.removeCallback(action, scope = scope)
def get_callback(self, action, scope = 'public'):
return self.pluginapi.get_onionr().api.getCallback(action, scope= scope)
def get_callbacks(self, scope = None):
return self.pluginapi.get_onionr().api.getCallbacks(scope = scope)
class pluginapi: class pluginapi:
def __init__(self, onionr, data): def __init__(self, onionr, data):
self.onionr = onionr self.onionr = onionr
@ -142,6 +158,7 @@ class pluginapi:
self.daemon = DaemonAPI(self) self.daemon = DaemonAPI(self)
self.plugins = PluginAPI(self) self.plugins = PluginAPI(self)
self.commands = CommandAPI(self) self.commands = CommandAPI(self)
self.web = WebAPI(self)
def get_onionr(self): def get_onionr(self):
return self.onionr return self.onionr
@ -167,5 +184,8 @@ class pluginapi:
def get_commandapi(self): def get_commandapi(self):
return self.commands return self.commands
def get_webapi(self):
return self.web
def is_development_mode(self): def is_development_mode(self):
return self.get_onionr()._developmentMode return self.get_onionr()._developmentMode
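With the WebAPI wrapper added above, a plugin can register HTTP callbacks from its own hooks via pluginapi.get_webapi(). A sketch of how that might look from a plugin's on_init; the callback signature is an assumption, since this diff only shows the registration side:

def handle_hello(request_data=None):
    # Hypothetical handler body; return value handling depends on the API layer
    return 'hello from plugin'

def on_init(api, data=None):
    web = api.get_webapi()
    web.register_callback('hello', handle_hello, scope='public')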

View File

@ -63,14 +63,16 @@ def enable(name, onionr = None, start_event = True):
if exists(name): if exists(name):
enabled_plugins = get_enabled_plugins() enabled_plugins = get_enabled_plugins()
if not name in enabled_plugins: if not name in enabled_plugins:
try:
events.call(get_plugin(name), 'enable', onionr)
except ImportError: # Was getting import error on Gitlab CI test "data"
return False
else:
enabled_plugins.append(name) enabled_plugins.append(name)
config.set('plugins.enabled', enabled_plugins, True) config.set('plugins.enabled', enabled_plugins, True)
events.call(get_plugin(name), 'enable', onionr)
if start_event is True: if start_event is True:
start(name) start(name)
return True return True
else: else:
return False return False

View File

@ -18,20 +18,23 @@
along with this program. If not, see <https://www.gnu.org/licenses/>. along with this program. If not, see <https://www.gnu.org/licenses/>.
''' '''
import nacl.encoding, nacl.hash, nacl.utils, time, math, threading, binascii, logger, sys, base64 import nacl.encoding, nacl.hash, nacl.utils, time, math, threading, binascii, logger, sys, base64, json
import core import core
class POW: class DataPOW:
def __init__(self, data, threadCount = 5): def __init__(self, data, forceDifficulty=0, threadCount = 5):
self.foundHash = False self.foundHash = False
self.difficulty = 0 self.difficulty = 0
self.data = data self.data = data
self.threadCount = threadCount self.threadCount = threadCount
if forceDifficulty == 0:
dataLen = sys.getsizeof(data) dataLen = sys.getsizeof(data)
self.difficulty = math.floor(dataLen / 1000000) self.difficulty = math.floor(dataLen / 1000000)
if self.difficulty <= 2: if self.difficulty <= 2:
self.difficulty = 4 self.difficulty = 4
else:
self.difficulty = forceDifficulty
try: try:
self.data = self.data.encode() self.data = self.data.encode()
@ -113,3 +116,102 @@ class POW:
self.shutdown() self.shutdown()
logger.warn('Got keyboard interrupt while waiting for POW result, stopping') logger.warn('Got keyboard interrupt while waiting for POW result, stopping')
return result return result
class POW:
def __init__(self, metadata, data, threadCount = 5):
self.foundHash = False
self.difficulty = 0
self.data = data
self.metadata = metadata
self.threadCount = threadCount
dataLen = len(data) + len(json.dumps(metadata))
self.difficulty = math.floor(dataLen / 1000000)
if self.difficulty <= 2:
self.difficulty = 4
try:
self.data = self.data.encode()
except AttributeError:
pass
logger.info('Computing POW (difficulty: %s)...' % self.difficulty)
self.mainHash = '0' * 64
self.puzzle = self.mainHash[0:min(self.difficulty, len(self.mainHash))]
myCore = core.Core()
for i in range(max(1, threadCount)):
t = threading.Thread(name = 'thread%s' % i, target = self.pow, args = (True,myCore))
t.start()
return
def pow(self, reporting = False, myCore = None):
startTime = math.floor(time.time())
self.hashing = True
self.reporting = reporting
iFound = False # if current thread is the one that found the answer
answer = ''
heartbeat = 200000
hbCount = 0
while self.hashing:
rand = nacl.utils.random()
#token = nacl.hash.blake2b(rand + self.data).decode()
self.metadata['powRandomToken'] = base64.b64encode(rand).decode()
payload = json.dumps(self.metadata).encode() + b'\n' + self.data
token = myCore._crypto.sha3Hash(payload)
try:
# on some versions, token is bytes
token = token.decode()
except AttributeError:
pass
if self.puzzle == token[0:self.difficulty]:
self.hashing = False
iFound = True
self.result = payload
break
if iFound:
endTime = math.floor(time.time())
if self.reporting:
logger.debug('Found token after %s seconds: %s' % (endTime - startTime, token), timestamp=True)
logger.debug('Random value was: %s' % base64.b64encode(rand).decode())
def shutdown(self):
self.hashing = False
self.puzzle = ''
def changeDifficulty(self, newDiff):
self.difficulty = newDiff
def getResult(self):
'''
Returns the result then sets to false, useful to automatically clear the result
'''
try:
retVal = self.result
except AttributeError:
retVal = False
self.result = False
return retVal
def waitForResult(self):
'''
Returns the result only when it has been found, False if not running and not found
'''
result = False
try:
while True:
result = self.getResult()
if not self.hashing:
break
else:
time.sleep(2)
except KeyboardInterrupt:
self.shutdown()
logger.warn('Got keyboard interrupt while waiting for POW result, stopping')
return result
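The new POW class above searches for a random token such that the sha3 hash of the JSON metadata (carrying that token), a newline, and the block data starts with enough zero digits. A compact single-threaded sketch of the same search, using os.urandom in place of nacl.utils.random:

import base64, hashlib, json, os

def find_block_pow(metadata: dict, data: bytes, difficulty: int = 4) -> bytes:
    puzzle = '0' * difficulty
    while True:
        metadata['powRandomToken'] = base64.b64encode(os.urandom(32)).decode()
        payload = json.dumps(metadata).encode() + b'\n' + data
        if hashlib.sha3_256(payload).hexdigest().startswith(puzzle):
            return payload  # metadata line + content that satisfies the puzzle

# payload = find_block_pow({'type': 'txt'}, b'hello world', difficulty=2)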

View File

@ -18,12 +18,12 @@
along with this program. If not, see <https://www.gnu.org/licenses/>. along with this program. If not, see <https://www.gnu.org/licenses/>.
''' '''
# Misc functions that do not fit in the main api, but are useful # Misc functions that do not fit in the main api, but are useful
import getpass, sys, requests, os, socket, hashlib, logger, sqlite3, config, binascii, time, base64, json, glob, shutil, math, json import getpass, sys, requests, os, socket, hashlib, logger, sqlite3, config, binascii, time, base64, json, glob, shutil, math, json, re
import nacl.signing, nacl.encoding import nacl.signing, nacl.encoding
from onionrblockapi import Block from onionrblockapi import Block
import onionrexceptions import onionrexceptions
from defusedxml import minidom from defusedxml import minidom
import pgpwords
if sys.version_info < (3, 6): if sys.version_info < (3, 6):
try: try:
import sha3 import sha3
@ -33,7 +33,7 @@ if sys.version_info < (3, 6):
class OnionrUtils: class OnionrUtils:
''' '''
Various useful function Various useful functions for validation, connectivity, etc.
''' '''
def __init__(self, coreInstance): def __init__(self, coreInstance):
self.fingerprintFile = 'data/own-fingerprint.txt' self.fingerprintFile = 'data/own-fingerprint.txt'
@ -41,6 +41,9 @@ class OnionrUtils:
self.timingToken = '' self.timingToken = ''
self.avoidDupe = [] # list used to prevent duplicate requests per peer for certain actions
self.peerProcessing = {} # dict of current peer actions: peer, actionList
config.reload()
return return
def getTimeBypassToken(self): def getTimeBypassToken(self):
@ -49,43 +52,16 @@ class OnionrUtils:
with open('data/time-bypass.txt', 'r') as bypass: with open('data/time-bypass.txt', 'r') as bypass:
self.timingToken = bypass.read() self.timingToken = bypass.read()
except Exception as error: except Exception as error:
logger.error('Failed to fetch time bypass token.', error=error) logger.error('Failed to fetch time bypass token.', error = error)
def sendPM(self, pubkey, message): return self.timingToken
def getRoundedEpoch(self, roundS=60):
''' '''
High level function to encrypt a message to a peer and insert it as a block Returns the epoch, rounded down to given seconds (Default 60)
'''
try:
# We sign PMs here rather than in core.insertBlock in order to mask the sender's pubkey
payload = {'sig': '', 'msg': '', 'id': self._core._crypto.pubKey}
sign = self._core._crypto.edSign(message, self._core._crypto.privKey, encodeResult=True)
#encrypted = self._core._crypto.pubKeyEncrypt(message, pubkey, anonymous=True, encodedData=True).decode()
payload['sig'] = sign
payload['msg'] = message
payload = json.dumps(payload)
message = payload
encrypted = self._core._crypto.pubKeyEncrypt(message, pubkey, anonymous=True, encodedData=True).decode()
block = self._core.insertBlock(encrypted, header='pm', sign=False)
if block == '':
logger.error('Could not send PM')
else:
logger.info('Sent PM, hash: %s' % block)
except Exception as error:
logger.error('Failed to send PM.', error=error)
return
def getCurrentHourEpoch(self):
'''
Returns the current epoch, rounded down to the hour
''' '''
epoch = self.getEpoch() epoch = self.getEpoch()
return epoch - (epoch % 3600) return epoch - (epoch % roundS)
def incrementAddressSuccess(self, address): def incrementAddressSuccess(self, address):
''' '''
@ -134,7 +110,8 @@ class OnionrUtils:
else: else:
logger.warn("Failed to add key") logger.warn("Failed to add key")
else: else:
logger.warn('%s pow failed' % key[0]) pass
#logger.debug('%s pow failed' % key[0])
return retVal return retVal
except Exception as error: except Exception as error:
logger.error('Failed to merge keys.', error=error) logger.error('Failed to merge keys.', error=error)
@ -149,12 +126,16 @@ class OnionrUtils:
retVal = False retVal = False
if newAdderList != False: if newAdderList != False:
for adder in newAdderList.split(','): for adder in newAdderList.split(','):
if not adder in self._core.listAdders(randomOrder = False) and adder.strip() != self.getMyAddress(): adder = adder.strip()
if not adder in self._core.listAdders(randomOrder = False) and adder != self.getMyAddress() and not self._core._blacklist.inBlacklist(adder):
if not config.get('tor.v3onions') and len(adder) == 62:
continue
if self._core.addAddress(adder): if self._core.addAddress(adder):
logger.info('Added %s to db.' % adder, timestamp = True) logger.info('Added %s to db.' % adder, timestamp = True)
retVal = True retVal = True
else: else:
logger.debug('%s is either our address or already in our DB' % adder) pass
#logger.debug('%s is either our address or already in our DB' % adder)
return retVal return retVal
except Exception as error: except Exception as error:
logger.error('Failed to merge adders.', error = error) logger.error('Failed to merge adders.', error = error)
@ -176,14 +157,17 @@ class OnionrUtils:
config.reload() config.reload()
self.getTimeBypassToken() self.getTimeBypassToken()
# TODO: URL encode parameters, just as an extra measure. May not be needed, but should be added regardless. # TODO: URL encode parameters, just as an extra measure. May not be needed, but should be added regardless.
try:
with open('data/host.txt', 'r') as host: with open('data/host.txt', 'r') as host:
hostname = host.read() hostname = host.read()
except FileNotFoundError:
return False
payload = 'http://%s:%s/client/?action=%s&token=%s&timingToken=%s' % (hostname, config.get('client.port'), command, config.get('client.hmac'), self.timingToken) payload = 'http://%s:%s/client/?action=%s&token=%s&timingToken=%s' % (hostname, config.get('client.port'), command, config.get('client.hmac'), self.timingToken)
try: try:
retData = requests.get(payload).text retData = requests.get(payload).text
except Exception as error: except Exception as error:
if not silent: if not silent:
logger.error('Failed to make local request (command: %s).' % command, error=error) logger.error('Failed to make local request (command: %s):%s' % (command, error))
retData = False retData = False
return retData return retData
@ -209,20 +193,39 @@ class OnionrUtils:
return pass1 return pass1
def getHumanReadableID(self, pub=''):
'''gets a human readable ID from a public key'''
if pub == '':
pub = self._core._crypto.pubKey
pub = base64.b16encode(base64.b32decode(pub)).decode()
return '-'.join(pgpwords.wordify(pub))
def getBlockMetadataFromData(self, blockData): def getBlockMetadataFromData(self, blockData):
''' '''
accepts block contents as string and returns a tuple of metadata, meta (meta being internal metadata) accepts block contents as string, returns a tuple of metadata, meta (meta being internal metadata, which will be returned as an encrypted base64 string if it is encrypted, dict if not).
''' '''
meta = {}
metadata = {}
data = blockData
try: try:
blockData = blockData.encode() blockData = blockData.encode()
except AttributeError: except AttributeError:
pass pass
try:
metadata = json.loads(blockData[:blockData.find(b'\n')].decode()) metadata = json.loads(blockData[:blockData.find(b'\n')].decode())
except json.decoder.JSONDecodeError:
pass
else:
data = blockData[blockData.find(b'\n'):].decode() data = blockData[blockData.find(b'\n'):].decode()
if not metadata['encryptType'] in ('asym', 'sym'):
try: try:
meta = json.loads(metadata['meta']) meta = json.loads(metadata['meta'])
except KeyError: except KeyError:
meta = {} pass
meta = metadata['meta']
return (metadata, meta, data) return (metadata, meta, data)
def checkPort(self, port, host=''): def checkPort(self, port, host=''):
@ -253,6 +256,29 @@ class OnionrUtils:
else: else:
return True return True
def processBlockMetadata(self, blockHash):
'''
Read metadata from a block and cache it to the block database
'''
myBlock = Block(blockHash, self._core)
if myBlock.isEncrypted:
myBlock.decrypt()
blockType = myBlock.getMetadata('type') # we would use myBlock.getType() here, but it is bugged with encrypted blocks
try:
if len(blockType) <= 10:
self._core.updateBlockInfo(blockHash, 'dataType', blockType)
except TypeError:
pass
def escapeAnsi(self, line):
'''
Remove ANSI escape codes from a string with regex
taken or adapted from: https://stackoverflow.com/a/38662876
'''
ansi_escape = re.compile(r'(\x9B|\x1B\[)[0-?]*[ -/]*[@-~]')
return ansi_escape.sub('', line)
def getBlockDBHash(self): def getBlockDBHash(self):
''' '''
Return a sha3_256 hash of the blocks DB Return a sha3_256 hash of the blocks DB
@ -310,7 +336,7 @@ class OnionrUtils:
return retVal return retVal
def validateMetadata(metadata): def validateMetadata(self, metadata, blockData):
'''Validate metadata meets onionr spec (does not validate proof value computation), take in either dictionary or json string''' '''Validate metadata meets onionr spec (does not validate proof value computation), take in either dictionary or json string'''
# TODO, make this check sane sizes # TODO, make this check sane sizes
retData = False retData = False
@ -334,9 +360,30 @@ class OnionrUtils:
if self._core.requirements.blockMetadataLengths[i] < len(metadata[i]): if self._core.requirements.blockMetadataLengths[i] < len(metadata[i]):
logger.warn('Block metadata key ' + i + ' exceeded maximum size') logger.warn('Block metadata key ' + i + ' exceeded maximum size')
break break
if i == 'time':
if not self.isIntegerString(metadata[i]):
logger.warn('Block metadata time stamp is not integer string')
break
else: else:
# if metadata loop gets no errors, it does not break, therefore metadata is valid # if metadata loop gets no errors, it does not break, therefore metadata is valid
# make sure we do not have another block with the same data content (prevent data duplication and replay attacks)
nonce = self._core._utils.bytesToStr(self._core._crypto.sha3Hash(blockData))
try:
with open(self._core.dataNonceFile, 'r') as nonceFile:
if nonce in nonceFile.read():
retData = False # we've seen that nonce before, so we can't pass metadata
raise onionrexceptions.DataExists
except FileNotFoundError:
retData = True retData = True
except onionrexceptions.DataExists:
# do not set retData to True, because nonce has been seen before
pass
else:
retData = True
if retData:
# Executes if data not seen
with open(self._core.dataNonceFile, 'a') as nonceFile:
nonceFile.write(nonce + '\n')
else: else:
logger.warn('In call to utils.validateMetadata, metadata must be JSON string or a dictionary object') logger.warn('In call to utils.validateMetadata, metadata must be JSON string or a dictionary object')
@ -357,6 +404,14 @@ class OnionrUtils:
retVal = True retVal = True
return retVal return retVal
def isIntegerString(self, data):
'''Check if a string is a valid base10 integer'''
try:
int(data)
except ValueError:
return False
else:
return True
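The duplicate-data check added to validateMetadata() above hashes the block payload and refuses it if that nonce has been recorded before, which blocks replayed or duplicated content. A standalone sketch of the same idea; the file name and location are simplified (Onionr keeps its nonce file under data/):

import hashlib, os

NONCE_FILE = 'block-nonces.dat'

def is_new_payload(block_data: bytes) -> bool:
    nonce = hashlib.sha3_256(block_data).hexdigest()
    if os.path.exists(NONCE_FILE):
        with open(NONCE_FILE) as nonce_file:
            if nonce in nonce_file.read():
                return False          # seen before: duplicate or replay
    with open(NONCE_FILE, 'a') as nonce_file:
        nonce_file.write(nonce + '\n')
    return True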
def validateID(self, id): def validateID(self, id):
''' '''
@ -405,52 +460,6 @@ class OnionrUtils:
except: except:
return False return False
def loadPMs(self):
'''
Find, decrypt, and return array of PMs (array of dictionary, {from, text})
'''
blocks = Block.getBlocks(type = 'pm', core = self._core)
message = ''
sender = ''
for i in blocks:
try:
blockContent = i.getContent()
try:
message = self._core._crypto.pubKeyDecrypt(blockContent, encodedData=True, anonymous=True)
except nacl.exceptions.CryptoError as e:
pass
else:
try:
message = message.decode()
except AttributeError:
pass
try:
message = json.loads(message)
except json.decoder.JSONDecodeError:
pass
else:
logger.debug('Decrypted %s:' % i.getHash())
logger.info(message["msg"])
signer = message["id"]
sig = message["sig"]
if self.validatePubKey(signer):
if self._core._crypto.edVerify(message["msg"], signer, sig, encodedData=True):
logger.info("Good signature by %s" % signer)
else:
logger.warn("Bad signature by %s" % signer)
else:
logger.warn('Bad sender id: %s' % signer)
except FileNotFoundError:
pass
except Exception as error:
logger.error('Failed to open block %s.' % i, error=error)
return
def getPeerByHashId(self, hash): def getPeerByHashId(self, hash):
''' '''
Return the pubkey of the user if known from the hash Return the pubkey of the user if known from the hash
@ -536,29 +545,58 @@ class OnionrUtils:
'''returns epoch''' '''returns epoch'''
return math.floor(time.time()) return math.floor(time.time())
def doGetRequest(self, url, port=0, proxyType='tor'): def doPostRequest(self, url, data={}, port=0, proxyType='tor'):
''' '''
Do a get request through a local tor or i2p instance Do a POST request through a local tor or i2p instance
''' '''
if proxyType == 'tor': if proxyType == 'tor':
if port == 0: if port == 0:
raise onionrexceptions.MissingPort('Socks port required for Tor HTTP get request') port = self._core.torPort
proxies = {'http': 'socks5://127.0.0.1:' + str(port), 'https': 'socks5://127.0.0.1:' + str(port)} proxies = {'http': 'socks4a://127.0.0.1:' + str(port), 'https': 'socks4a://127.0.0.1:' + str(port)}
elif proxyType == 'i2p': elif proxyType == 'i2p':
proxies = {'http': 'http://127.0.0.1:4444'} proxies = {'http': 'http://127.0.0.1:4444'}
else: else:
return return
headers = {'user-agent': 'PyOnionr'} headers = {'user-agent': 'PyOnionr'}
try: try:
proxies = {'http': 'socks5h://127.0.0.1:' + str(port), 'https': 'socks5h://127.0.0.1:' + str(port)} proxies = {'http': 'socks4a://127.0.0.1:' + str(port), 'https': 'socks4a://127.0.0.1:' + str(port)}
r = requests.get(url, headers=headers, proxies=proxies, allow_redirects=False, timeout=(15, 30)) r = requests.post(url, data=data, headers=headers, proxies=proxies, allow_redirects=False, timeout=(15, 30))
retData = r.text retData = r.text
except KeyboardInterrupt:
raise KeyboardInterrupt
except requests.exceptions.RequestException as e: except requests.exceptions.RequestException as e:
logger.debug('Error: %s' % str(e)) logger.debug('Error: %s' % str(e))
retData = False retData = False
return retData return retData
def getNistBeaconSalt(self, torPort=0): def doGetRequest(self, url, port=0, proxyType='tor'):
'''
Do a get request through a local tor or i2p instance
'''
retData = False
if proxyType == 'tor':
if port == 0:
raise onionrexceptions.MissingPort('Socks port required for Tor HTTP get request')
proxies = {'http': 'socks4a://127.0.0.1:' + str(port), 'https': 'socks4a://127.0.0.1:' + str(port)}
elif proxyType == 'i2p':
proxies = {'http': 'http://127.0.0.1:4444'}
else:
return
headers = {'user-agent': 'PyOnionr'}
try:
proxies = {'http': 'socks4a://127.0.0.1:' + str(port), 'https': 'socks4a://127.0.0.1:' + str(port)}
r = requests.get(url, headers=headers, proxies=proxies, allow_redirects=False, timeout=(15, 30))
retData = r.text
except KeyboardInterrupt:
raise KeyboardInterrupt
except ValueError as e:
logger.debug('Failed to make request', error = e)
except requests.exceptions.RequestException as e:
logger.debug('Error: %s' % str(e))
retData = False
return retData
def getNistBeaconSalt(self, torPort=0, rounding=3600):
''' '''
Get the token for the current hour from the NIST randomness beacon Get the token for the current hour from the NIST randomness beacon
''' '''
@ -568,7 +606,7 @@ class OnionrUtils:
except IndexError: except IndexError:
raise onionrexceptions.MissingPort('Missing Tor socks port') raise onionrexceptions.MissingPort('Missing Tor socks port')
retData = '' retData = ''
curTime = self._core._utils.getCurrentHourEpoch curTime = self.getRoundedEpoch(rounding)
self.nistSaltTimestamp = curTime self.nistSaltTimestamp = curTime
data = self.doGetRequest('https://beacon.nist.gov/rest/record/' + str(curTime), port=torPort) data = self.doGetRequest('https://beacon.nist.gov/rest/record/' + str(curTime), port=torPort)
dataXML = minidom.parseString(data, forbid_dtd=True, forbid_entities=True, forbid_external=True) dataXML = minidom.parseString(data, forbid_dtd=True, forbid_entities=True, forbid_external=True)
@ -580,6 +618,19 @@ class OnionrUtils:
self.powSalt = retData self.powSalt = retData
return retData return retData
def strToBytes(self, data):
try:
data = data.encode()
except AttributeError:
pass
return data
def bytesToStr(self, data):
try:
data = data.decode()
except AttributeError:
pass
return data
def size(path='.'): def size(path='.'):
''' '''
Returns the size of a folder's contents in bytes Returns the size of a folder's contents in bytes

View File

@ -21,4 +21,4 @@
class OnionrValues: class OnionrValues:
def __init__(self): def __init__(self):
self.passwordLength = 20 self.passwordLength = 20
self.blockMetadataLengths = {'meta': 1000, 'sig': 88, 'signer': 64, 'time': 10, 'powRandomToken': '1000'} self.blockMetadataLengths = {'meta': 1000, 'sig': 200, 'signer': 200, 'time': 10, 'powRandomToken': 1000, 'encryptType': 4} #TODO properly refine values to minimum needed
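The expanded blockMetadataLengths table above caps each metadata field; validateMetadata() rejects blocks whose headers exceed these sizes. A compact sketch of how such limits can be applied (the real code iterates the incoming metadata and warns on the first oversized key):

LIMITS = {'meta': 1000, 'sig': 200, 'signer': 200, 'time': 10,
          'powRandomToken': 1000, 'encryptType': 4}

def metadata_within_limits(metadata: dict) -> bool:
    for key, value in metadata.items():
        if key not in LIMITS:
            return False              # unknown header key
        if len(str(value)) > LIMITS[key]:
            return False              # header value exceeds its cap
    return True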

315
onionr/pgpwords.py Normal file
View File

@ -0,0 +1,315 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*- (because 0xFF, even : "Yucatán")
import os, re, sys
_words = [
["aardvark", "adroitness"],
["absurd", "adviser"],
["accrue", "aftermath"],
["acme", "aggregate"],
["adrift", "alkali"],
["adult", "almighty"],
["afflict", "amulet"],
["ahead", "amusement"],
["aimless", "antenna"],
["Algol", "applicant"],
["allow", "Apollo"],
["alone", "armistice"],
["ammo", "article"],
["ancient", "asteroid"],
["apple", "Atlantic"],
["artist", "atmosphere"],
["assume", "autopsy"],
["Athens", "Babylon"],
["atlas", "backwater"],
["Aztec", "barbecue"],
["baboon", "belowground"],
["backfield", "bifocals"],
["backward", "bodyguard"],
["banjo", "bookseller"],
["beaming", "borderline"],
["bedlamp", "bottomless"],
["beehive", "Bradbury"],
["beeswax", "bravado"],
["befriend", "Brazilian"],
["Belfast", "breakaway"],
["berserk", "Burlington"],
["billiard", "businessman"],
["bison", "butterfat"],
["blackjack", "Camelot"],
["blockade", "candidate"],
["blowtorch", "cannonball"],
["bluebird", "Capricorn"],
["bombast", "caravan"],
["bookshelf", "caretaker"],
["brackish", "celebrate"],
["breadline", "cellulose"],
["breakup", "certify"],
["brickyard", "chambermaid"],
["briefcase", "Cherokee"],
["Burbank", "Chicago"],
["button", "clergyman"],
["buzzard", "coherence"],
["cement", "combustion"],
["chairlift", "commando"],
["chatter", "company"],
["checkup", "component"],
["chisel", "concurrent"],
["choking", "confidence"],
["chopper", "conformist"],
["Christmas", "congregate"],
["clamshell", "consensus"],
["classic", "consulting"],
["classroom", "corporate"],
["cleanup", "corrosion"],
["clockwork", "councilman"],
["cobra", "crossover"],
["commence", "crucifix"],
["concert", "cumbersome"],
["cowbell", "customer"],
["crackdown", "Dakota"],
["cranky", "decadence"],
["crowfoot", "December"],
["crucial", "decimal"],
["crumpled", "designing"],
["crusade", "detector"],
["cubic", "detergent"],
["dashboard", "determine"],
["deadbolt", "dictator"],
["deckhand", "dinosaur"],
["dogsled", "direction"],
["dragnet", "disable"],
["drainage", "disbelief"],
["dreadful", "disruptive"],
["drifter", "distortion"],
["dropper", "document"],
["drumbeat", "embezzle"],
["drunken", "enchanting"],
["Dupont", "enrollment"],
["dwelling", "enterprise"],
["eating", "equation"],
["edict", "equipment"],
["egghead", "escapade"],
["eightball", "Eskimo"],
["endorse", "everyday"],
["endow", "examine"],
["enlist", "existence"],
["erase", "exodus"],
["escape", "fascinate"],
["exceed", "filament"],
["eyeglass", "finicky"],
["eyetooth", "forever"],
["facial", "fortitude"],
["fallout", "frequency"],
["flagpole", "gadgetry"],
["flatfoot", "Galveston"],
["flytrap", "getaway"],
["fracture", "glossary"],
["framework", "gossamer"],
["freedom", "graduate"],
["frighten", "gravity"],
["gazelle", "guitarist"],
["Geiger", "hamburger"],
["glitter", "Hamilton"],
["glucose", "handiwork"],
["goggles", "hazardous"],
["goldfish", "headwaters"],
["gremlin", "hemisphere"],
["guidance", "hesitate"],
["hamlet", "hideaway"],
["highchair", "holiness"],
["hockey", "hurricane"],
["indoors", "hydraulic"],
["indulge", "impartial"],
["inverse", "impetus"],
["involve", "inception"],
["island", "indigo"],
["jawbone", "inertia"],
["keyboard", "infancy"],
["kickoff", "inferno"],
["kiwi", "informant"],
["klaxon", "insincere"],
["locale", "insurgent"],
["lockup", "integrate"],
["merit", "intention"],
["minnow", "inventive"],
["miser", "Istanbul"],
["Mohawk", "Jamaica"],
["mural", "Jupiter"],
["music", "leprosy"],
["necklace", "letterhead"],
["Neptune", "liberty"],
["newborn", "maritime"],
["nightbird", "matchmaker"],
["Oakland", "maverick"],
["obtuse", "Medusa"],
["offload", "megaton"],
["optic", "microscope"],
["orca", "microwave"],
["payday", "midsummer"],
["peachy", "millionaire"],
["pheasant", "miracle"],
["physique", "misnomer"],
["playhouse", "molasses"],
["Pluto", "molecule"],
["preclude", "Montana"],
["prefer", "monument"],
["preshrunk", "mosquito"],
["printer", "narrative"],
["prowler", "nebula"],
["pupil", "newsletter"],
["puppy", "Norwegian"],
["python", "October"],
["quadrant", "Ohio"],
["quiver", "onlooker"],
["quota", "opulent"],
["ragtime", "Orlando"],
["ratchet", "outfielder"],
["rebirth", "Pacific"],
["reform", "pandemic"],
["regain", "Pandora"],
["reindeer", "paperweight"],
["rematch", "paragon"],
["repay", "paragraph"],
["retouch", "paramount"],
["revenge", "passenger"],
["reward", "pedigree"],
["rhythm", "Pegasus"],
["ribcage", "penetrate"],
["ringbolt", "perceptive"],
["robust", "performance"],
["rocker", "pharmacy"],
["ruffled", "phonetic"],
["sailboat", "photograph"],
["sawdust", "pioneer"],
["scallion", "pocketful"],
["scenic", "politeness"],
["scorecard", "positive"],
["Scotland", "potato"],
["seabird", "processor"],
["select", "provincial"],
["sentence", "proximate"],
["shadow", "puberty"],
["shamrock", "publisher"],
["showgirl", "pyramid"],
["skullcap", "quantity"],
["skydive", "racketeer"],
["slingshot", "rebellion"],
["slowdown", "recipe"],
["snapline", "recover"],
["snapshot", "repellent"],
["snowcap", "replica"],
["snowslide", "reproduce"],
["solo", "resistor"],
["southward", "responsive"],
["soybean", "retraction"],
["spaniel", "retrieval"],
["spearhead", "retrospect"],
["spellbind", "revenue"],
["spheroid", "revival"],
["spigot", "revolver"],
["spindle", "sandalwood"],
["spyglass", "sardonic"],
["stagehand", "Saturday"],
["stagnate", "savagery"],
["stairway", "scavenger"],
["standard", "sensation"],
["stapler", "sociable"],
["steamship", "souvenir"],
["sterling", "specialist"],
["stockman", "speculate"],
["stopwatch", "stethoscope"],
["stormy", "stupendous"],
["sugar", "supportive"],
["surmount", "surrender"],
["suspense", "suspicious"],
["sweatband", "sympathy"],
["swelter", "tambourine"],
["tactics", "telephone"],
["talon", "therapist"],
["tapeworm", "tobacco"],
["tempest", "tolerance"],
["tiger", "tomorrow"],
["tissue", "torpedo"],
["tonic", "tradition"],
["topmost", "travesty"],
["tracker", "trombonist"],
["transit", "truncated"],
["trauma", "typewriter"],
["treadmill", "ultimate"],
["Trojan", "undaunted"],
["trouble", "underfoot"],
["tumor", "unicorn"],
["tunnel", "unify"],
["tycoon", "universe"],
["uncut", "unravel"],
["unearth", "upcoming"],
["unwind", "vacancy"],
["uproot", "vagabond"],
["upset", "vertigo"],
["upshot", "Virginia"],
["vapor", "visitor"],
["village", "vocalist"],
["virus", "voyager"],
["Vulcan", "warranty"],
["waffle", "Waterloo"],
["wallet", "whimsical"],
["watchword", "Wichita"],
["wayside", "Wilmington"],
["willow", "Wyoming"],
["woodlark", "yesteryear"],
["Zulu", "Yucatán"]]
hexre = re.compile("[a-fA-F0-9]+")
def wordify(seq):
seq = filter(lambda x: x not in (' ', '\n', '\t'), seq)
seq = "".join(seq) # Python3 compatibility
if not hexre.match(seq):
raise Exception("Input is not a valid hexadecimal value.")
if len(seq) % 2:
raise Exception("Input contains an odd number of bytes.")
ret = []
for i in range(0, len(seq), 2):
ret.append(_words[int(seq[i:i+2], 16)][(i//2)%2])
return ret
def usage():
print("Usage:")
print(" {0} [fingerprint...]".format(os.path.basename(sys.argv[0])))
print("")
print("If called with multiple arguments, they will be concatenated")
print("and treated as a single fingerprint.")
print("")
print("If called with no arguments, input is read from stdin,")
print("and each line is treated as a single fingerprint. In this")
print("mode, invalid values are silently ignored.")
exit(1)
if __name__ == '__main__':
if 1 == len(sys.argv):
fps = sys.stdin.readlines()
else:
fps = [" ".join(sys.argv[1:])]
for fp in fps:
try:
words = wordify(fp)
print("\n{0}: ".format(fp.strip()))
sys.stdout.write("\t")
for i in range(0, len(words)):
sys.stdout.write(words[i] + " ")
if (not (i+1) % 4) and not i == len(words)-1:
sys.stdout.write("\n\t")
print("")
except Exception as e:
if len(fps) == 1:
print (e)
usage()
print("")

View File

@ -0,0 +1 @@
https://3g2upl4pq6kufc4m.onion/robots.txt,http://expyuzz4wqqyqhjn.onion/robots.txt,https://onionr.voidnet.tech/

View File

@ -0,0 +1,5 @@
{
"name" : "flow",
"version" : "1.0",
"author" : "onionr"
}

View File

@ -0,0 +1,88 @@
'''
Onionr - P2P Microblogging Platform & Social network
This default plugin handles "flow" messages (global chatroom style communication)
'''
'''
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
# Imports some useful libraries
import logger, config, threading, time
from onionrblockapi import Block
plugin_name = 'flow'
class OnionrFlow:
def __init__(self):
self.myCore = pluginapi.get_core()
self.alreadyOutputed = []
self.flowRunning = False
return
def start(self):
message = ""
self.flowRunning = True
newThread = threading.Thread(target=self.showOutput)
newThread.start()
while self.flowRunning:
try:
message = logger.readline('\nInsert message into flow:').strip().replace('\n', '\\n').replace('\r', '\\r')
except EOFError:
pass
except KeyboardInterrupt:
self.flowRunning = False
if message == "q":
self.flowRunning = False
if len(message) > 0:
Block(content = message, type = 'txt', core = self.myCore).save()
logger.info("Flow is exiting, goodbye")
return
def showOutput(self):
while self.flowRunning:
for block in Block.getBlocks(type = 'txt', core = self.myCore):
if block.getHash() in self.alreadyOutputed:
continue
if not self.flowRunning:
break
logger.info('\n------------------------', prompt = False)
content = block.getContent()
# Escape new lines, remove trailing whitespace, and escape ansi sequences
content = self.myCore._utils.escapeAnsi(content.replace('\n', '\\n').replace('\r', '\\r').strip())
logger.info(block.getDate().strftime("%m/%d %H:%M") + ' - ' + logger.colors.reset + content, prompt = False)
self.alreadyOutputed.append(block.getHash())
try:
time.sleep(5)
except KeyboardInterrupt:
self.flowRunning = False
pass
def on_init(api, data = None):
'''
This event is called after Onionr is initialized, but before the entered
command is executed. It may be called while the daemon is starting or when
only the client is running.
'''
# Doing this makes it so that the other functions can access the api object
# by simply referencing the variable `pluginapi`.
global pluginapi
pluginapi = api
flow = OnionrFlow()
api.commands.register('flow', flow.start)
api.commands.register_help('flow', 'Open the flow messaging interface')
return


@ -1,135 +0,0 @@
#!/usr/bin/python
'''
Onionr - P2P Microblogging Platform & Social network
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
# Imports some useful libraries
import logger, config, core
import os, sqlite3, threading
from onionrblockapi import Block
plugin_name = 'gui'
def send():
global message
block = Block()
block.setType('txt')
block.setContent(message)
logger.debug('Sent message in block %s.' % block.save(sign = True))
def sendMessage():
global sendEntry
global message
message = sendEntry.get()
t = threading.Thread(target = send)
t.start()
sendEntry.delete(0, len(message))
def update():
global listedBlocks, listbox, runningCheckDelayCount, runningCheckDelay, root, daemonStatus
for i in Block.getBlocks(type = 'txt'):
if i.getContent().strip() == '' or i.getHash() in listedBlocks:
continue
listbox.insert(99999, str(i.getContent()))
listedBlocks.append(i.getHash())
listbox.see(99999)
runningCheckDelayCount += 1
if runningCheckDelayCount == runningCheckDelay:
resp = pluginapi.daemon.local_command('ping')
if resp == 'pong':
daemonStatus.config(text = "Onionr Daemon Status: Running")
else:
daemonStatus.config(text = "Onionr Daemon Status: Not Running")
runningCheckDelayCount = 0
root.after(10000, update)
def reallyOpenGUI():
import tkinter
global root, runningCheckDelay, runningCheckDelayCount, scrollbar, listedBlocks, nodeInfo, keyInfo, idText, idEntry, pubKeyEntry, listbox, daemonStatus, sendEntry
root = tkinter.Tk()
root.title("Onionr GUI")
runningCheckDelay = 5
runningCheckDelayCount = 4
scrollbar = tkinter.Scrollbar(root)
scrollbar.pack(side=tkinter.RIGHT, fill=tkinter.Y)
listedBlocks = []
nodeInfo = tkinter.Frame(root)
keyInfo = tkinter.Frame(root)
hostname = pluginapi.get_onionr().get_hostname()
logger.debug('Onionr Hostname: %s' % hostname)
idText = hostname
idEntry = tkinter.Entry(nodeInfo)
tkinter.Label(nodeInfo, text = "Node Address: ").pack(side=tkinter.LEFT)
idEntry.pack()
idEntry.insert(0, idText.strip())
idEntry.configure(state="readonly")
nodeInfo.pack()
pubKeyEntry = tkinter.Entry(keyInfo)
tkinter.Label(keyInfo, text="Public key: ").pack(side=tkinter.LEFT)
pubKeyEntry.pack()
pubKeyEntry.insert(0, pluginapi.get_core()._crypto.pubKey)
pubKeyEntry.configure(state="readonly")
keyInfo.pack()
sendEntry = tkinter.Entry(root)
sendBtn = tkinter.Button(root, text='Send Message', command=sendMessage)
sendEntry.pack(side=tkinter.TOP, pady=5)
sendBtn.pack(side=tkinter.TOP)
listbox = tkinter.Listbox(root, yscrollcommand=tkinter.Scrollbar.set, height=15)
listbox.pack(fill=tkinter.BOTH, pady=25)
daemonStatus = tkinter.Label(root, text="Onionr Daemon Status: unknown")
daemonStatus.pack()
scrollbar.config(command=tkinter.Listbox.yview)
root.after(2000, update)
root.mainloop()
def openGUI():
t = threading.Thread(target = reallyOpenGUI)
t.daemon = False
t.start()
def on_init(api, data = None):
global pluginapi
pluginapi = api
api.commands.register(['gui', 'launch-gui', 'open-gui'], openGUI)
api.commands.register_help('gui', 'Opens a graphical interface for Onionr')
return


@ -1,5 +1,5 @@
 {
-"name" : "gui",
+"name" : "pms",
 "version" : "1.0",
 "author" : "onionr"
 }


@ -0,0 +1,200 @@
'''
Onionr - P2P Microblogging Platform & Social network
This default plugin handles private messages in an email like fashion
'''
'''
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
'''
# Imports some useful libraries
import logger, config, threading, time, readline, datetime
from onionrblockapi import Block
import onionrexceptions
import locale
locale.setlocale(locale.LC_ALL, '')
plugin_name = 'pms'
PLUGIN_VERSION = '0.0.1'
def draw_border(text):
#https://stackoverflow.com/a/20757491
lines = text.splitlines()
width = max(len(s) for s in lines)
res = ['┌' + '─' * width + '┐']
for s in lines:
res.append('│' + (s + ' ' * width)[:width] + '│')
res.append('└' + '─' * width + '┘')
return '\n'.join(res)
class MailStrings:
def __init__(self, mailInstance):
self.mailInstance = mailInstance
self.programTag = 'OnionrMail v%s' % (PLUGIN_VERSION)
choices = ['view inbox', 'view sentbox', 'send message', 'help', 'quit']
self.mainMenuChoices = choices
self.mainMenu = '''\n
-----------------
1. %s
2. %s
3. %s
4. %s
5. %s''' % (choices[0], choices[1], choices[2], choices[3], choices[4])
class OnionrMail:
def __init__(self, pluginapi):
self.myCore = pluginapi.get_core()
#self.dataFolder = pluginapi.get_data_folder()
self.strings = MailStrings(self)
return
def inbox(self):
blockCount = 0
pmBlockMap = {}
pmBlocks = {}
logger.info('Decrypting messages...')
choice = ''
# this could use a lot of memory if someone has received a lot of messages
for blockHash in self.myCore.getBlocksByType('pm'):
pmBlocks[blockHash] = Block(blockHash, core=self.myCore)
pmBlocks[blockHash].decrypt()
while choice not in ('-q', 'q', 'quit'):
blockCount = 0
for blockHash in pmBlocks:
if not pmBlocks[blockHash].decrypted:
continue
blockCount += 1
pmBlockMap[blockCount] = blockHash
blockDate = pmBlocks[blockHash].getDate().strftime("%m/%d %H:%M")
print('%s. %s: %s' % (blockCount, blockDate, blockHash))
try:
choice = logger.readline('Enter a block number, -r to refresh, or -q to stop: ').strip().lower()
except (EOFError, KeyboardInterrupt):
choice = '-q'
if choice in ('-q', 'q', 'quit'):
continue
if choice in ('-r', 'r', 'refresh'):
# dirty hack
self.inbox()
return
try:
choice = int(choice)
except ValueError:
pass
else:
try:
pmBlockMap[choice]
readBlock = pmBlocks[pmBlockMap[choice]]
except KeyError:
pass
else:
cancel = ''
readBlock.verifySig()
print('Message received from %s' % (readBlock.signer,))
print('Valid signature:', readBlock.validSig)
if not readBlock.validSig:
logger.warn('This message has an INVALID signature. ANYONE could have sent this message.')
cancel = logger.readline('Press enter to continue to message, or -q to not open the message (recommended).')
if cancel != '-q':
print(draw_border(self.myCore._utils.escapeAnsi(readBlock.bcontent.decode().strip())))
return
def draftMessage(self):
message = ''
newLine = ''
recip = ''
entering = True
while entering:
try:
recip = logger.readline('Enter peer address, or q to stop:').strip()
if recip in ('-q', 'q'):
raise EOFError
if not self.myCore._utils.validatePubKey(recip):
raise onionrexceptions.InvalidPubkey('Must be a valid ed25519 base32 encoded public key')
except onionrexceptions.InvalidPubkey:
logger.warn('Invalid public key')
except (KeyboardInterrupt, EOFError):
entering = False
else:
break
else:
# if -q or ctrl-c/d, exit function here, otherwise we successfully got the public key
return
print('Enter your message, stop by entering -q on a new line.')
while newLine != '-q':
try:
newLine = input()
except (KeyboardInterrupt, EOFError):
pass
if newLine == '-q':
continue
newLine += '\n'
message += newLine
print('Inserting encrypted message as Onionr block....')
self.myCore.insertBlock(message, header='pm', encryptType='asym', asymPeer=recip, sign=True)
def menu(self):
choice = ''
while True:
print(self.strings.programTag + '\n\nOur ID: ' + self.myCore._crypto.pubKey + self.strings.mainMenu.title()) # print out main menu
try:
choice = logger.readline('Enter 1-%s:\n' % (len(self.strings.mainMenuChoices))).lower().strip()
except (KeyboardInterrupt, EOFError):
choice = '5'
if choice in (self.strings.mainMenuChoices[0], '1'):
self.inbox()
elif choice in (self.strings.mainMenuChoices[1], '2'):
logger.warn('not implemented yet')
elif choice in (self.strings.mainMenuChoices[2], '3'):
self.draftMessage()
elif choice in (self.strings.mainMenuChoices[3], '4'):
logger.warn('not implemented yet')
elif choice in (self.strings.mainMenuChoices[4], '5'):
logger.info('Goodbye.')
break
elif choice == '':
pass
else:
logger.warn('Invalid choice.')
return
def on_init(api, data = None):
'''
This event is called after Onionr is initialized, but before the entered
command is executed. It may be called while the daemon is starting or when
only the client is running.
'''
pluginapi = api
mail = OnionrMail(pluginapi)
api.commands.register(['mail'], mail.menu)
api.commands.register_help('mail', 'Interact with OnionrMail')
return


@ -3,10 +3,25 @@
"dev_mode": true, "dev_mode": true,
"display_header" : true, "display_header" : true,
"newCommunicator": false, "direct_connect" : {
"respond" : true,
"execute_callbacks" : true
}
},
"dc_response": true, "www" : {
"dc_execcallbacks" : true "public" : {
"run" : true
},
"private" : {
"run" : true
},
"ui" : {
"run" : true,
"private" : true
}
}, },
"client" : { "client" : {
@ -26,7 +41,7 @@
}, },
"tor" : { "tor" : {
"v3onions": false
}, },
"i2p":{ "i2p":{
@ -36,9 +51,14 @@
}, },
"allocations":{ "allocations":{
"disk": 1000000000, "disk": 9000000000,
"netTotal": 1000000000, "netTotal": 1000000000,
"blockCache" : 5000000, "blockCache": 5000000,
"blockCacheTotal" : 50000000 "blockCacheTotal": 50000000
},
"peers":{
"minimumScore": -100,
"maxStoredPeers": 500,
"maxConnect": 5
} }
} }


@ -18,7 +18,7 @@ P ::: :::: ::::::: :::: :::: W:: :: :: ::: :: :: :: :: :::: :::::
P ::: ::::: :::::: :::: :::: W:: :: :: ::: :: :: :: :: ::: :: ::: P ::: ::::: :::::: :::: :::: W:: :: :: ::: :: :: :: :: ::: :: :::
P :::: ::::: ::::: ::: W :::: :: :: :: ::::: :: :: :: :: P :::: ::::: ::::: ::: W :::: :: :: :: ::::: :: :: :: ::
P :::: :::::: :::::: :::: P :::: :::::: :::::: ::::
P :::: :::::::::::: :::: P :::: :::::::::::: :::: GvPBV
P ::::: :::::::: :::: P ::::: :::::::: ::::
P ::::: :::::: P ::::: ::::::
P :::::::::::::::: P ::::::::::::::::


@ -1,5 +1,7 @@
 <h1>This is an Onionr Node</h1>
-<p>The content on this server is not necessarily created or intentionally stored by the owner of the server.</p>
+<p>The content on this server is not necessarily created by the server owner, and was not necessarily stored with the owner's knowledge.</p>
+<p>Onionr is a decentralized, distributed data storage system, that anyone can insert data into.</p>
 <p>To learn more about Onionr, see the website at <a href="https://onionr.voidnet.tech/">https://Onionr.VoidNet.tech/</a></p>


@ -0,0 +1,44 @@
# Onionr UI
## About
The default GUI for Onionr
## Setup
To compile the application, simply execute the following:
```
python3 compile.py
```
If you want to compile Onionr UI for another language, execute the following, replacing `[lang]` with the target language (supported languages include `eng` for English, `spa` for Spanish, and `zho` for Chinese):
```
python3 compile.py [lang]
```
## FAQ
### Why "compile" anyway?
This web application is compiled for a few reasons:
1. To make it easier to update; this way, we do not have to update the header in every file when we want to change something about it.
2. To make the application smaller in size; shared markup such as the header and footer is stored once rather than duplicated in every file.
3. For multi-language support; with the Python "tags" feature, strings can be referenced by variable name and dynamically inserted into each page at compile time, based on a language file.
4. For compile-time customizations.
### What exactly happens when you compile?
Upon compilation, files from the `src/` directory are copied to the `dist/` directory, the shared header and footer are injected in the proper places, and Python "tags" are interpreted.
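As a rough illustration of the injection step only (a standalone sketch, not the actual build script; the placeholder strings below stand in for the real `common/header.html` and `common/footer.html`):

```python
# Standalone sketch: compile.py swaps the literal markers "<header />" and
# "<footer />" in each copied page for the shared header/footer contents.
page = '<html><head><header /></head><body>Hello<footer /></body></html>'
header = '<title>Onionr UI</title>'            # stand-in for common/header.html
footer = '<script src="js/main.js"></script>'  # stand-in for common/footer.html
print(page.replace('<header />', header).replace('<footer />', footer))
```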
### How do Python "tags" work?
There are two types of Python "tags":
1. Logic tags (`<$ logic $>`): These tags allow you to perform logic at compile time. Example: `<$ import datetime; lastUpdate = datetime.datetime.now() $>`: This gets the current time while compiling, then stores it in `lastUpdate`.
2. Data tags (`<$= data $>`): These tags evaluate the expression between the markers and write the result directly to the page. Example: `<$= 'This application was compiled at %s.' % lastUpdate $>`: This writes that message, with the compile time filled in, to the page.
**Note:** Logic tags take a higher priority and will always be interpreted first.
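For illustration, here is a minimal standalone sketch of that tag syntax (a simplified re-implementation for this document, not the real `compile.py`; `LANG` is a stand-in for the generated language object):

```python
import re

# Toy stand-in for the LANG object that compile.py builds from lang.json
LANG = type('LANG', (), {'TIMELINE': 'Timeline'})

def render(template):
    # logic tags <$ ... $> are executed and removed from the output
    for match in re.findall(r'(<\$(?!=)(.*?)\$>)', template):
        exec(match[1].strip())
        template = template.replace(match[0], '')
    # data tags <$= ... $> are evaluated and their value written to the page
    for match in re.findall(r'(<\$=(.*?)\$>)', template):
        template = template.replace(match[0], str(eval(match[1].strip())))
    return template

print(render('<a href="index.html"><$= LANG.TIMELINE $></a>'))
# -> <a href="index.html">Timeline</a>
```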
### How does the language feature work?
When you use a data tag to write a string to the page (e.g. `<$= LANG.HELLO_WORLD $>`), the language feature takes the dictionary for the currently selected language from the language map file (`lang.json`), looks up the key (the variable name after the `LANG.` prefix in the data tag, such as `HELLO_WORLD` in the example before), and writes that string to the page. Language variables are always prefixed with `LANG.` and should always be uppercase, as they are constants.
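Concretely (a minimal sketch with a toy language map rather than the real `lang.json`):

```python
import json

# Toy language map in the same shape as lang.json (not the real file)
langmap = json.loads('{"eng": {"HELLO_WORLD": "Hello world"}, "spa": {"HELLO_WORLD": "Hola mundo"}}')

# compile.py exposes the selected language's strings as class attributes,
# so a data tag like <$= LANG.HELLO_WORLD $> evaluates to the mapped string
LANG = type('LANG', (), langmap['eng'])
print(LANG.HELLO_WORLD)  # -> Hello world
```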
### I changed a few things in the application and tried to view the updates in my browser, but nothing changed!
You most likely forgot to compile. Try running `python3 compile.py` and check again. If you are still having issues, [open up an issue](https://gitlab.com/beardog/Onionr/issues/new?issue[title]=Onionr UI not updating after compiling).


@ -0,0 +1,4 @@
<script src="https://code.jquery.com/jquery-3.3.1.slim.min.js" integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.3/umd/popper.min.js" integrity="sha384-ZMP7rVo3mIykV+2+9J3UJ46jBk0WLaUAdn689aCwoqbBJiSnjAK/l8WvCWPIPm49" crossorigin="anonymous"></script>
<script src="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/js/bootstrap.min.js" integrity="sha384-ChfqqxuZUCnJSK3+MXmPNIyE6ZbWh2IMqE241rYiqJxyMiZ6OW/JmZQ5stwEULTy" crossorigin="anonymous"></script>
<script src="js/main.js"></script>


@ -0,0 +1,30 @@
<title><$= LANG.ONIONR_TITLE $></title>
<meta name="viewport" content="width=device-width, initial-scale=1" />
<meta http-equiv="content-type" content="text/html; charset=utf-8" />
<link rel="stylesheet" type="text/css" href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/css/bootstrap.min.css" integrity="sha384-MCw98/SFnGE8fJT3GXwEOngsV7Zt27NXFoaoApmYm81iuXoPkFOJwJ8ERdknLPMO" crossorigin="anonymous" />
<link rel="stylesheet" type="text/css" href="css/main.css" />
<link rel="stylesheet" type="text/css" href="css/themes/dark.css" />
<nav class="navbar navbar-expand-lg navbar-dark bg-dark fixed-top">
<a class="navbar-brand" href="#">Onionr</a>
<button class="navbar-toggler" type="button" data-toggle="collapse" data-target="#navbarSupportedContent" aria-controls="navbarSupportedContent" aria-expanded="false" aria-label="Toggle navigation">
<span class="navbar-toggler-icon"></span>
</button>
<div class="collapse navbar-collapse" id="navbarSupportedContent">
<ul class="navbar-nav mr-auto">
<li class="nav-item active">
<a class="nav-link" href="index.html"><$= LANG.TIMELINE $></a>
</li>
<li class="nav-item">
<a class="nav-link" href="notifications.html"><$= LANG.NOTIFICATIONS $></a>
</li>
<li class="nav-item">
<a class="nav-link" href="messages.html"><$= LANG.MESSAGES $></a>
</li>
</ul>
</div>
</nav>


@ -0,0 +1,32 @@
<!-- POST -->
<div class="col-12">
<div class="onionr-post">
<div class="row">
<div class="col-2">
<img class="onionr-post-user-icon" src="$user-image">
</div>
<div class="col-10">
<div class="row">
<div class="col col-auto">
<a class="onionr-post-user-name" href="#!" onclick="viewProfile('$user-id-url', '$user-name-url')">$user-name</a>
<a class="onionr-post-user-id" href="#!" onclick="viewProfile('$user-id-url', '$user-name-url')" data-placement="top" data-toggle="tooltip" title="$user-id">$user-id-truncated</a>
</div>
<div class="col col-auto text-right ml-auto pl-0">
<div class="onionr-post-date text-right" data-placement="top" data-toggle="tooltip" title="$date">$date-relative</div>
</div>
</div>
<div class="onionr-post-content">
$content
</div>
<div class="onionr-post-controls pt-2">
<a href="#!" onclick="toggleLike('$post-id')" class="glyphicon glyphicon-heart mr-2"><$= LANG.POST_LIKE $></a>
<a href="#!" onclick="reply('$post-id')" class="glyphicon glyphicon-comment mr-2"><$= LANG.POST_REPLY $></a>
</div>
</div>
</div>
</div>
</div>
<!-- END POST -->


@ -0,0 +1,130 @@
#!/usr/bin/python3
import shutil, os, re, json, traceback
# get user's config
settings = {}
with open('config.json', 'r') as file:
settings = json.loads(file.read())
# "hardcoded" config, not for user to mess with
HEADER_FILE = 'common/header.html'
FOOTER_FILE = 'common/footer.html'
SRC_DIR = 'src/'
DST_DIR = 'dist/'
HEADER_STRING = '<header />'
FOOTER_STRING = '<footer />'
# remove and recreate the dst folder, so top-level files can be copied into it
shutil.rmtree(DST_DIR, ignore_errors=True)
os.makedirs(DST_DIR, exist_ok=True)
# taken from https://stackoverflow.com/questions/1868714/how-do-i-copy-an-entire-directory-of-files-into-an-existing-directory-using-pyth
def copytree(src, dst, symlinks=False, ignore=None):
for item in os.listdir(src):
s = os.path.join(src, item)
d = os.path.join(dst, item)
if os.path.isdir(s):
shutil.copytree(s, d, symlinks, ignore)
else:
shutil.copy2(s, d)
# copy src to dst
copytree(SRC_DIR, DST_DIR, False)
# load in lang map
langmap = {}
with open('lang.json', 'r') as file:
langmap = json.loads(file.read())[settings['language']]
LANG = type('LANG', (), langmap)
# templating
class Template:
def jsTemplate(template):
with open('common/%s.html' % template, 'r') as file:
return Template.parseTags(file.read().replace('\\', '\\\\').replace('\'', '\\\'').replace('\n', "\\\n"))
def htmlTemplate(template):
with open('common/%s.html' % template, 'r') as file:
return Template.parseTags(file.read())
# tag parser
def parseTags(contents):
# <$ logic $>
for match in re.findall(r'(<\$(?!=)(.*?)\$>)', contents):
try:
out = exec(match[1].strip())
contents = contents.replace(match[0], '' if out is None else str(out))
except Exception as e:
print('Error: Failed to execute python tag: %s\n' % (match[1],)) # note: the current filename is not in scope here
traceback.print_exc()
print('\nIgnoring this error, continuing to compile...\n')
# <$= data $>
for match in re.findall(r'(<\$=(.*?)\$>)', contents):
try:
out = eval(match[1].strip())
contents = contents.replace(match[0], '' if out is None else str(out))
except NameError as e:
name = match[1].strip()
print('Warning: %s does not exist, treating as an str' % name)
contents = contents.replace(match[0], name)
except Exception as e:
print('Error: Failed to execute python tag: %s\n' % (match[1],)) # note: the current filename is not in scope here
traceback.print_exc()
print('\nIgnoring this error, continuing to compile...\n')
return contents
def jsTemplate(contents):
return Template.jsTemplate(contents)
def htmlTemplate(contents):
return Template.htmlTemplate(contents)
# get header file
with open(HEADER_FILE, 'r') as file:
HEADER_FILE = file.read()
if settings['python_tags']:
HEADER_FILE = Template.parseTags(HEADER_FILE)
# get footer file
with open(FOOTER_FILE, 'r') as file:
FOOTER_FILE = file.read()
if settings['python_tags']:
FOOTER_FILE = Template.parseTags(FOOTER_FILE)
# iterate dst, replace files
def iterate(directory):
for filename in os.listdir(directory):
if filename.split('.')[-1].lower() in ['htm', 'html', 'css', 'js']:
try:
path = os.path.join(directory, filename)
if os.path.isdir(path):
iterate(path)
else:
contents = ''
with open(path, 'r') as file:
# get file contents
contents = file.read()
os.remove(path)
with open(path, 'w') as file:
# set the header & footer
contents = contents.replace(HEADER_STRING, HEADER_FILE)
contents = contents.replace(FOOTER_STRING, FOOTER_FILE)
# do python tags
if settings['python_tags']:
contents = Template.parseTags(contents)
# write file
file.write(contents)
except Exception as e:
print('Error: Failed to parse file: %s\n' % filename)
traceback.print_exc()
print('\nIgnoring this error, continuing to compile...\n')
iterate(DST_DIR)


@ -0,0 +1,4 @@
{
"language" : "eng",
"python_tags" : true
}


@ -0,0 +1,79 @@
/* general formatting */
@media (min-width: 768px) {
.container-small {
width: 300px;
}
.container-large {
width: 970px;
}
}
@media (min-width: 992px) {
.container-small {
width: 500px;
}
.container-large {
width: 1170px;
}
}
@media (min-width: 1200px) {
.container-small {
width: 700px;
}
.container-large {
width: 1500px;
}
}
.container-small, .container-large {
max-width: 100%;
}
/* navbar */
body {
margin-top: 5rem;
}
/* timeline */
.onionr-post {
padding: 1rem;
margin-bottom: 1rem;
width: 100%;
}
.onionr-post-user-name {
display: inline;
}
.onionr-post-user-id:before { content: "("; }
.onionr-post-user-id:after { content: ")"; }
.onionr-post-content {
word-wrap: break-word;
}
.onionr-post-user-icon {
border-radius: 100%;
width: 100%;
}
.h-divider {
margin: 5px 15px;
height: 1px;
width: 100%;
}
/* profile */
.onionr-profile-user-icon {
border-radius: 100%;
width: 100%;
margin-bottom: 1rem;
}
.onionr-profile-username {
text-align: center;
}


@ -0,0 +1,36 @@
body {
background-color: #96928f;
color: #25383C;
}
/* timeline */
.onionr-post {
border: 1px solid black;
border-radius: 1rem;
background-color: lightgray;
}
.onionr-post-user-name {
color: green;
font-weight: bold;
}
.onionr-post-user-id {
color: gray;
}
.onionr-post-date {
color: gray;
}
.onionr-post-content {
font-family: sans-serif, serif;
border-top: 1px solid black;
font-size: 15pt;
}
.h-divider {
border-top:1px solid gray;
}

Binary file not shown (new image file added, 6.6 KiB).


@ -0,0 +1,79 @@
<!DOCTYPE html>
<html>
<head>
<title>Onionr UI</title>
<meta name="viewport" content="width=device-width, initial-scale=1" />
<meta http-equiv="content-type" content="text/html; charset=utf-8" />
<link rel="stylesheet" type="text/css" href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/css/bootstrap.min.css" integrity="sha384-MCw98/SFnGE8fJT3GXwEOngsV7Zt27NXFoaoApmYm81iuXoPkFOJwJ8ERdknLPMO" crossorigin="anonymous" />
<link rel="stylesheet" type="text/css" href="css/main.css" />
<link rel="stylesheet" type="text/css" href="css/themes/dark.css" />
<nav class="navbar navbar-expand-lg navbar-dark bg-dark fixed-top">
<a class="navbar-brand" href="#">Onionr</a>
<button class="navbar-toggler" type="button" data-toggle="collapse" data-target="#navbarSupportedContent" aria-controls="navbarSupportedContent" aria-expanded="false" aria-label="Toggle navigation">
<span class="navbar-toggler-icon"></span>
</button>
<div class="collapse navbar-collapse" id="navbarSupportedContent">
<ul class="navbar-nav mr-auto">
<li class="nav-item active">
<a class="nav-link" href="index.html">Timeline</a>
</li>
<li class="nav-item">
<a class="nav-link" href="notifications.html">Notifications</a>
</li>
<li class="nav-item">
<a class="nav-link" href="messages.html">Messages</a>
</li>
</ul>
</div>
</nav>
</head>
<body>
<div class="container">
<div class="row">
<div class="col-12 col-lg-3">
<div class="onionr-profile">
<div class="row">
<div class="col-4 col-lg-12">
<img id="onionr-profile-user-icon" class="onionr-profile-user-icon" src="img/default.png">
</div>
<div class="col-8 col-lg-12">
<h2 id="onionr-profile-username" class="onionr-profile-username text-left text-lg-center text-sm-left">arinerron</h2>
</div>
</div>
</div>
</div>
<div class="h-divider pb-3 d-block d-lg-none"></div>
<div class="col-sm-12 col-lg-6">
<div class="row" id="onionr-timeline-posts">
</div>
</div>
<div class="d-none d-lg-block col-lg-3">
<div class="row">
<div class="col-12">
<div class="onionr-trending">
<h2>Trending</h2>
</div>
</div>
</div>
</div>
</div>
</div>
<script src="https://code.jquery.com/jquery-3.3.1.slim.min.js" integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.3/umd/popper.min.js" integrity="sha384-ZMP7rVo3mIykV+2+9J3UJ46jBk0WLaUAdn689aCwoqbBJiSnjAK/l8WvCWPIPm49" crossorigin="anonymous"></script>
<script src="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/js/bootstrap.min.js" integrity="sha384-ChfqqxuZUCnJSK3+MXmPNIyE6ZbWh2IMqE241rYiqJxyMiZ6OW/JmZQ5stwEULTy" crossorigin="anonymous"></script>
<script src="js/main.js"></script>
<script src="js/timeline.js"></script>
</body>
</html>


@ -0,0 +1,451 @@
/* handy localstorage functions for quick usage */
function set(key, val) {
return localStorage.setItem(key, val);
}
function get(key, df) { // df is default
var value = localStorage.getItem(key);
if(value == null)
value = df;
return value;
}
function remove(key) {
return localStorage.removeItem(key);
}
var usermap = JSON.parse(get('usermap', '{}'));
function getUserMap() {
return usermap;
}
function deserializeUser(id) {
var serialized = getUserMap()[id]
var user = new User();
user.setName(serialized['name']);
user.setID(serialized['id']);
user.setIcon(serialized['icon']);
return user;
}
/* returns a relative date format, e.g. "5 minutes" */
function timeSince(date, size) {
// taken from https://stackoverflow.com/a/3177838/3678023
var seconds = Math.floor((new Date() - date) / 1000);
var interval = Math.floor(seconds / 31536000);
if (size === null)
size = 'desktop';
var dates = {
'mobile' : {
'yr' : 'yrs',
'mo' : 'mo',
'd' : 'd',
'hr' : 'h',
'min' : 'm',
'secs' : 's',
'sec' : 's',
},
'desktop' : {
'yr' : ' years',
'mo' : ' months',
'd' : ' days',
'hr' : ' hours',
'min' : ' minutes',
'secs' : ' seconds',
'sec' : ' second',
},
};
if (interval > 1)
return interval + dates[size]['yr'];
interval = Math.floor(seconds / 2592000);
if (interval > 1)
return interval + dates[size]['mo'];
interval = Math.floor(seconds / 86400);
if (interval > 1)
return interval + dates[size]['d'];
interval = Math.floor(seconds / 3600);
if (interval > 1)
return interval + dates[size]['hr'];
interval = Math.floor(seconds / 60);
if (interval > 1)
return interval + dates[size]['min'];
if(Math.floor(seconds) !== 1)
return Math.floor(seconds) + dates[size]['secs'];
return '1' + dates[size]['sec'];
}
/* replace all instances of string */
String.prototype.replaceAll = function(search, replacement) {
// taken from https://stackoverflow.com/a/17606289/3678023
var target = this;
return target.split(search).join(replacement);
};
/* useful functions to sanitize data */
class Sanitize {
/* sanitizes HTML in a string */
static html(html) {
return String(html).replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;').replace(/"/g, '&quot;');
}
/* URL encodes a string */
static url(url) {
return encodeURIComponent(url);
}
}
/* config stuff */
function getWebPassword() {
return get("web-password", null);
}
function setWebPassword(password) {
return set("web-password", password);
}
function getTimingToken() {
return get("timing-token", null);
}
function setTimingToken(token) {
return set("timing-token", token);
}
/* user class */
class User {
constructor() {
this.name = 'Unknown';
this.id = 'unknown';
this.image = 'img/default.png';
}
setName(name) {
this.name = name;
}
getName() {
return this.name;
}
setID(id) {
this.id = id;
}
getID() {
return this.id;
}
setIcon(image) {
this.image = image;
}
getIcon() {
return this.image;
}
serialize() {
return {
'name' : this.getName(),
'id' : this.getID(),
'icon' : this.getIcon()
};
}
remember() {
usermap[this.getID()] = this.serialize();
set('usermap', JSON.stringify(usermap));
}
}
/* post class */
class Post {
/* returns the html content of a post */
getHTML() {
var postTemplate = '<!-- POST -->\
<div class="col-12">\
<div class="onionr-post">\
<div class="row">\
<div class="col-2">\
<img class="onionr-post-user-icon" src="$user-image">\
</div>\
<div class="col-10">\
<div class="row">\
<div class="col col-auto">\
<a class="onionr-post-user-name" href="#!" onclick="viewProfile(\'$user-id-url\', \'$user-name-url\')">$user-name</a>\
<a class="onionr-post-user-id" href="#!" onclick="viewProfile(\'$user-id-url\', \'$user-name-url\')" data-placement="top" data-toggle="tooltip" title="$user-id">$user-id-truncated</a>\
</div>\
\
<div class="col col-auto text-right ml-auto pl-0">\
<div class="onionr-post-date text-right" data-placement="top" data-toggle="tooltip" title="$date">$date-relative</div>\
</div>\
</div>\
\
<div class="onionr-post-content">\
$content\
</div>\
\
<div class="onionr-post-controls pt-2">\
<a href="#!" onclick="toggleLike(\'$post-id\')" class="glyphicon glyphicon-heart mr-2">like</a>\
<a href="#!" onclick="reply(\'$post-id\')" class="glyphicon glyphicon-comment mr-2">reply</a>\
</div>\
</div>\
</div>\
</div>\
</div>\
<!-- END POST -->\
';
var device = (jQuery(document).width() < 768 ? 'mobile' : 'desktop');
postTemplate = postTemplate.replaceAll('$user-name-url', Sanitize.html(Sanitize.url(this.getUser().getName())));
postTemplate = postTemplate.replaceAll('$user-name', Sanitize.html(this.getUser().getName()));
postTemplate = postTemplate.replaceAll('$user-id-url', Sanitize.html(Sanitize.url(this.getUser().getID())));
postTemplate = postTemplate.replaceAll('$user-id-truncated', Sanitize.html(this.getUser().getID().substring(0, 12) + '...'));
// postTemplate = postTemplate.replaceAll('$user-id-truncated', Sanitize.html(this.getUser().getID().split('-').slice(0, 4).join('-')));
postTemplate = postTemplate.replaceAll('$user-id', Sanitize.html(this.getUser().getID()));
postTemplate = postTemplate.replaceAll('$user-image', Sanitize.html(this.getUser().getIcon()));
postTemplate = postTemplate.replaceAll('$content', Sanitize.html(this.getContent()));
postTemplate = postTemplate.replaceAll('$date-relative', timeSince(this.getPostDate(), device) + (device === 'desktop' ? ' ago' : ''));
postTemplate = postTemplate.replaceAll('$date', this.getPostDate().toLocaleString());
return postTemplate;
}
setUser(user) {
this.user = user;
}
getUser() {
return this.user;
}
setContent(content) {
this.content = content;
}
getContent() {
return this.content;
}
setPostDate(date) { // unix timestamp input
if(date instanceof Date)
this.date = date;
else
this.date = new Date(date * 1000);
}
getPostDate() {
return this.date;
}
}
/* block class */
class Block {
constructor(type, content) {
this.type = type;
this.content = content;
}
// returns the block hash, if any
getHash() {
return this.hash;
}
// returns the block type
getType() {
return this.type;
}
// returns the block header
getHeader(key, df) { // df is default
if(key !== undefined) {
if(this.getHeader().hasOwnProperty(key))
return this.getHeader()[key];
else
return (df === undefined ? null : df);
} else
return this.header;
}
// returns the block metadata
getMetadata(key, df) { // df is default
if(key !== undefined) {
if(this.getMetadata().hasOwnProperty(key))
return this.getMetadata()[key];
else
return (df === undefined ? null : df);
} else
return this.metadata;
}
// returns the block content
getContent() {
return this.content;
}
// returns the parent block's hash (not Block object, for performance)
getParent() {
if(!(this.parent instanceof Block) && this.parent !== undefined && this.parent !== null)
this.parent = Block.openBlock(this.parent); // convert hash to Block object
return this.parent;
}
// returns the date that the block was received
getDate() {
return this.date;
}
// returns a boolean that indicates whether or not the block is valid
isValid() {
return this.valid;
}
// returns a boolean that indicates whether or not the block is signed
isSigned() {
return this.signed;
}
// returns the block signature
getSignature() {
return this.signature;
}
// returns the block type
setType(type) {
this.type = type;
return this;
}
// sets block metadata by key
setMetadata(key, val) {
this.metadata[key] = val;
return this;
}
// sets block content
setContent(content) {
this.content = content;
return this;
}
// sets the block parent by hash or Block object
setParent(parent) {
this.parent = parent;
return this;
}
// indicates if the Block exists or not
exists() {
return !(this.hash === null || this.hash === undefined);
}
/* static functions */
// recreates a block by hash
static openBlock(hash) {
return parseBlock(response);
}
// converts an associative array to a Block
static parseBlock(val) {
var block = new Block();
block.type = val['type'];
block.content = val['content'];
block.header = val['header'];
block.metadata = val['metadata'];
block.date = new Date(val['date'] * 1000);
block.hash = val['hash'];
block.signature = val['signature'];
block.signed = val['signed'];
block.valid = val['valid'];
block.parent = val['parent'];
if(block.getParent() !== null) {
// if the block data is already in the associative array
/*
if (blocks.hasOwnProperty(block.getParent()))
block.setParent(Block.parseAssociativeArray({blocks[block.getParent()]})[0]);
*/
}
return block;
}
// converts an array of associative arrays to an array of Blocks
static parseBlockArray(blocks) {
var outputBlocks = [];
for(var key in blocks) {
if(blocks.hasOwnProperty(key)) {
var val = blocks[key];
var block = Block.parseBlock(val);
outputBlocks.push(block);
}
}
return outputBlocks;
}
static getBlocks(args, callback) { // callback is optional
args = args || {}
var url = '/client/?action=searchBlocks&data=' + Sanitize.url(JSON.stringify(args)) + '&token=' + Sanitize.url(getWebPassword()) + '&timingToken=' + Sanitize.url(getTimingToken());
console.log(url);
var http = new XMLHttpRequest();
if(callback !== undefined) {
// async
http.addEventListener('load', function() {
callback(Block.parseBlockArray(JSON.parse(http.responseText)['blocks']));
}, false);
http.open('GET', url, true);
http.timeout = 5000;
http.send(null);
} else {
// sync
http.open('GET', url, false);
http.send(null);
return Block.parseBlockArray(JSON.parse(http.responseText)['blocks']);
}
}
}
/* temporary code */
if(getWebPassword() === null) {
var password = "";
while(password.length != 64) {
password = prompt("Please enter the web password (run `./RUN-LINUX.sh --get-password`)");
}
setTimingToken(prompt("Please enter the timing token (optional)"));
setWebPassword(password);
window.location.reload(true);
}


@ -0,0 +1,27 @@
/* just for testing rn */
Block.getBlocks({'type' : 'onionr-post', 'signed' : true, 'reverse' : true}, function(data) {
for(var i = 0; i < data.length; i++) {
try {
var block = data[i];
var post = new Post();
var user = new User();
var blockContent = JSON.parse(block.getContent());
user.setName('unknown');
user.setID(new String(block.getHeader('signer', 'unknown')));
post.setContent(blockContent['content']);
post.setPostDate(block.getDate());
post.setUser(user);
document.getElementById('onionr-timeline-posts').innerHTML += post.getHTML();
} catch(e) {
console.log(e);
}
}
});
function viewProfile(id, name) {
document.getElementById("onionr-profile-username").innerHTML = Sanitize.html(decodeURIComponent(name));
}


@ -0,0 +1,40 @@
{
"eng" : {
"ONIONR_TITLE" : "Onionr UI",
"TIMELINE" : "Timeline",
"NOTIFICATIONS" : "Notifications",
"MESSAGES" : "Messages",
"TRENDING" : "Trending",
"POST_LIKE" : "like",
"POST_REPLY" : "reply"
},
"spa" : {
"ONIONR_TITLE" : "Onionr UI",
"TIMELINE" : "Linea de Tiempo",
"NOTIFICATIONS" : "Notificaciones",
"MESSAGES" : "Mensaje",
"TRENDING" : "Trending",
"POST_LIKE" : "me gusta",
"POST_REPLY" : "comentario"
},
"zho" : {
"ONIONR_TITLE" : "洋葱 用户界面",
"TIMELINE" : "时间线",
"NOTIFICATIONS" : "通知",
"MESSAGES" : "消息",
"TRENDING" : "趋势",
"POST_LIKE" : "喜欢",
"POST_REPLY" : "回复"
}
}


@ -0,0 +1,79 @@
/* general formatting */
@media (min-width: 768px) {
.container-small {
width: 300px;
}
.container-large {
width: 970px;
}
}
@media (min-width: 992px) {
.container-small {
width: 500px;
}
.container-large {
width: 1170px;
}
}
@media (min-width: 1200px) {
.container-small {
width: 700px;
}
.container-large {
width: 1500px;
}
}
.container-small, .container-large {
max-width: 100%;
}
/* navbar */
body {
margin-top: 5rem;
}
/* timeline */
.onionr-post {
padding: 1rem;
margin-bottom: 1rem;
width: 100%;
}
.onionr-post-user-name {
display: inline;
}
.onionr-post-user-id:before { content: "("; }
.onionr-post-user-id:after { content: ")"; }
.onionr-post-content {
word-wrap: break-word;
}
.onionr-post-user-icon {
border-radius: 100%;
width: 100%;
}
.h-divider {
margin: 5px 15px;
height: 1px;
width: 100%;
}
/* profile */
.onionr-profile-user-icon {
border-radius: 100%;
width: 100%;
margin-bottom: 1rem;
}
.onionr-profile-username {
text-align: center;
}


@ -0,0 +1,36 @@
body {
background-color: #96928f;
color: #25383C;
}
/* timeline */
.onionr-post {
border: 1px solid black;
border-radius: 1rem;
background-color: lightgray;
}
.onionr-post-user-name {
color: green;
font-weight: bold;
}
.onionr-post-user-id {
color: gray;
}
.onionr-post-date {
color: gray;
}
.onionr-post-content {
font-family: sans-serif, serif;
border-top: 1px solid black;
font-size: 15pt;
}
.h-divider {
border-top:1px solid gray;
}

Binary file not shown (new image file added, 6.6 KiB).


@ -0,0 +1,45 @@
<!DOCTYPE html>
<html>
<head>
<header />
</head>
<body>
<div class="container">
<div class="row">
<div class="col-12 col-lg-3">
<div class="onionr-profile">
<div class="row">
<div class="col-4 col-lg-12">
<img id="onionr-profile-user-icon" class="onionr-profile-user-icon" src="img/default.png">
</div>
<div class="col-8 col-lg-12">
<h2 id="onionr-profile-username" class="onionr-profile-username text-left text-lg-center text-sm-left">arinerron</h2>
</div>
</div>
</div>
</div>
<div class="h-divider pb-3 d-block d-lg-none"></div>
<div class="col-sm-12 col-lg-6">
<div class="row" id="onionr-timeline-posts">
</div>
</div>
<div class="d-none d-lg-block col-lg-3">
<div class="row">
<div class="col-12">
<div class="onionr-trending">
<h2><$= LANG.TRENDING $></h2>
</div>
</div>
</div>
</div>
</div>
</div>
<footer />
<script src="js/timeline.js"></script>
</body>
</html>


@ -0,0 +1,419 @@
/* handy localstorage functions for quick usage */
function set(key, val) {
return localStorage.setItem(key, val);
}
function get(key, df) { // df is default
var value = localStorage.getItem(key);
if(value == null)
value = df;
return value;
}
function remove(key) {
return localStorage.removeItem(key);
}
var usermap = JSON.parse(get('usermap', '{}'));
function getUserMap() {
return usermap;
}
function deserializeUser(id) {
var serialized = getUserMap()[id]
var user = new User();
user.setName(serialized['name']);
user.setID(serialized['id']);
user.setIcon(serialized['icon']);
return user;
}
/* returns a relative date format, e.g. "5 minutes" */
function timeSince(date, size) {
// taken from https://stackoverflow.com/a/3177838/3678023
var seconds = Math.floor((new Date() - date) / 1000);
var interval = Math.floor(seconds / 31536000);
if (size === null)
size = 'desktop';
var dates = {
'mobile' : {
'yr' : 'yrs',
'mo' : 'mo',
'd' : 'd',
'hr' : 'h',
'min' : 'm',
'secs' : 's',
'sec' : 's',
},
'desktop' : {
'yr' : ' years',
'mo' : ' months',
'd' : ' days',
'hr' : ' hours',
'min' : ' minutes',
'secs' : ' seconds',
'sec' : ' second',
},
};
if (interval > 1)
return interval + dates[size]['yr'];
interval = Math.floor(seconds / 2592000);
if (interval > 1)
return interval + dates[size]['mo'];
interval = Math.floor(seconds / 86400);
if (interval > 1)
return interval + dates[size]['d'];
interval = Math.floor(seconds / 3600);
if (interval > 1)
return interval + dates[size]['hr'];
interval = Math.floor(seconds / 60);
if (interval > 1)
return interval + dates[size]['min'];
if(Math.floor(seconds) !== 1)
return Math.floor(seconds) + dates[size]['secs'];
return '1' + dates[size]['sec'];
}
/* replace all instances of string */
String.prototype.replaceAll = function(search, replacement) {
// taken from https://stackoverflow.com/a/17606289/3678023
var target = this;
return target.split(search).join(replacement);
};
/* useful functions to sanitize data */
class Sanitize {
/* sanitizes HTML in a string */
static html(html) {
return String(html).replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;').replace(/"/g, '&quot;');
}
/* URL encodes a string */
static url(url) {
return encodeURIComponent(url);
}
}
/* config stuff */
function getWebPassword() {
return get("web-password", null);
}
function setWebPassword(password) {
return set("web-password", password);
}
function getTimingToken() {
return get("timing-token", null);
}
function setTimingToken(token) {
return set("timing-token", token);
}
/* user class */
class User {
constructor() {
this.name = 'Unknown';
this.id = 'unknown';
this.image = 'img/default.png';
}
setName(name) {
this.name = name;
}
getName() {
return this.name;
}
setID(id) {
this.id = id;
}
getID() {
return this.id;
}
setIcon(image) {
this.image = image;
}
getIcon() {
return this.image;
}
serialize() {
return {
'name' : this.getName(),
'id' : this.getID(),
'icon' : this.getIcon()
};
}
remember() {
usermap[this.getID()] = this.serialize();
set('usermap', JSON.stringify(usermap));
}
}
/* post class */
class Post {
/* returns the html content of a post */
getHTML() {
var postTemplate = '<$= jsTemplate('onionr-timeline-post') $>';
var device = (jQuery(document).width() < 768 ? 'mobile' : 'desktop');
postTemplate = postTemplate.replaceAll('$user-name-url', Sanitize.html(Sanitize.url(this.getUser().getName())));
postTemplate = postTemplate.replaceAll('$user-name', Sanitize.html(this.getUser().getName()));
postTemplate = postTemplate.replaceAll('$user-id-url', Sanitize.html(Sanitize.url(this.getUser().getID())));
postTemplate = postTemplate.replaceAll('$user-id-truncated', Sanitize.html(this.getUser().getID().substring(0, 12) + '...'));
// postTemplate = postTemplate.replaceAll('$user-id-truncated', Sanitize.html(this.getUser().getID().split('-').slice(0, 4).join('-')));
postTemplate = postTemplate.replaceAll('$user-id', Sanitize.html(this.getUser().getID()));
postTemplate = postTemplate.replaceAll('$user-image', Sanitize.html(this.getUser().getIcon()));
postTemplate = postTemplate.replaceAll('$content', Sanitize.html(this.getContent()));
postTemplate = postTemplate.replaceAll('$date-relative', timeSince(this.getPostDate(), device) + (device === 'desktop' ? ' ago' : ''));
postTemplate = postTemplate.replaceAll('$date', this.getPostDate().toLocaleString());
return postTemplate;
}
setUser(user) {
this.user = user;
}
getUser() {
return this.user;
}
setContent(content) {
this.content = content;
}
getContent() {
return this.content;
}
setPostDate(date) { // unix timestamp input
if(date instanceof Date)
this.date = date;
else
this.date = new Date(date * 1000);
}
getPostDate() {
return this.date;
}
}
/* block class */
class Block {
constructor(type, content) {
this.type = type;
this.content = content;
}
// returns the block hash, if any
getHash() {
return this.hash;
}
// returns the block type
getType() {
return this.type;
}
// returns the block header
getHeader(key, df) { // df is default
if(key !== undefined) {
if(this.getHeader().hasOwnProperty(key))
return this.getHeader()[key];
else
return (df === undefined ? null : df);
} else
return this.header;
}
// returns the block metadata
getMetadata(key, df) { // df is default
if(key !== undefined) {
if(this.getMetadata().hasOwnProperty(key))
return this.getMetadata()[key];
else
return (df === undefined ? null : df);
} else
return this.metadata;
}
// returns the block content
getContent() {
return this.content;
}
// returns the parent block's hash (not Block object, for performance)
getParent() {
if(!(this.parent instanceof Block) && this.parent !== undefined && this.parent !== null)
this.parent = Block.openBlock(this.parent); // convert hash to Block object
return this.parent;
}
// returns the date that the block was received
getDate() {
return this.date;
}
// returns a boolean that indicates whether or not the block is valid
isValid() {
return this.valid;
}
// returns a boolean that indicates whether or not the block is signed
isSigned() {
return this.signed;
}
// returns the block signature
getSignature() {
return this.signature;
}
// returns the block type
setType(type) {
this.type = type;
return this;
}
// sets block metadata by key
setMetadata(key, val) {
this.metadata[key] = val;
return this;
}
// sets block content
setContent(content) {
this.content = content;
return this;
}
// sets the block parent by hash or Block object
setParent(parent) {
this.parent = parent;
return this;
}
// indicates if the Block exists or not
exists() {
return !(this.hash === null || this.hash === undefined);
}
/* static functions */
// recreates a block by hash
static openBlock(hash) {
return parseBlock(response);
}
// converts an associative array to a Block
static parseBlock(val) {
var block = new Block();
block.type = val['type'];
block.content = val['content'];
block.header = val['header'];
block.metadata = val['metadata'];
block.date = new Date(val['date'] * 1000);
block.hash = val['hash'];
block.signature = val['signature'];
block.signed = val['signed'];
block.valid = val['valid'];
block.parent = val['parent'];
if(block.getParent() !== null) {
// if the block data is already in the associative array
/*
if (blocks.hasOwnProperty(block.getParent()))
block.setParent(Block.parseAssociativeArray({blocks[block.getParent()]})[0]);
*/
}
return block;
}
// converts an array of associative arrays to an array of Blocks
static parseBlockArray(blocks) {
var outputBlocks = [];
for(var key in blocks) {
if(blocks.hasOwnProperty(key)) {
var val = blocks[key];
var block = Block.parseBlock(val);
outputBlocks.push(block);
}
}
return outputBlocks;
}
static getBlocks(args, callback) { // callback is optional
args = args || {}
var url = '/client/?action=searchBlocks&data=' + Sanitize.url(JSON.stringify(args)) + '&token=' + Sanitize.url(getWebPassword()) + '&timingToken=' + Sanitize.url(getTimingToken());
console.log(url);
var http = new XMLHttpRequest();
if(callback !== undefined) {
// async
http.addEventListener('load', function() {
callback(Block.parseBlockArray(JSON.parse(http.responseText)['blocks']));
}, false);
http.open('GET', url, true);
http.timeout = 5000;
http.send(null);
} else {
// sync
http.open('GET', url, false);
http.send(null);
return Block.parseBlockArray(JSON.parse(http.responseText)['blocks']);
}
}
}
/* temporary code */
if(getWebPassword() === null) {
var password = "";
while(password.length != 64) {
password = prompt("Please enter the web password (run `./RUN-LINUX.sh --get-password`)");
}
setTimingToken(prompt("Please enter the timing token (optional)"));
setWebPassword(password);
window.location.reload(true);
}


@ -0,0 +1,28 @@
/* just for testing rn */
Block.getBlocks({'type' : 'onionr-post', 'signed' : true, 'reverse' : true}, function(data) {
for(var i = 0; i < data.length; i++) {
try {
var block = data[i];
var post = new Post();
var user = new User();
var blockContent = JSON.parse(block.getContent());
user.setName('unknown');
user.setID(new String(block.getHeader('signer', 'unknown')));
post.setContent(blockContent['content']);
post.setPostDate(block.getDate());
post.setUser(user);
document.getElementById('onionr-timeline-posts').innerHTML += post.getHTML();
} catch(e) {
console.log(e);
}
}
});
function viewProfile(id, name) {
document.getElementById("onionr-profile-username").innerHTML = Sanitize.html(decodeURIComponent(name));
}


@ -14,7 +14,7 @@
You should have received a copy of the GNU General Public License You should have received a copy of the GNU General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>. along with this program. If not, see <https://www.gnu.org/licenses/>.
''' '''
import unittest, sys, os, base64, tarfile, shutil, simplecrypt, logger #, btc import unittest, sys, os, base64, tarfile, shutil, simplecrypt, logger
class OnionrTests(unittest.TestCase): class OnionrTests(unittest.TestCase):
def testPython3(self): def testPython3(self):
@ -116,7 +116,7 @@ class OnionrTests(unittest.TestCase):
self.assertTrue(False) self.assertTrue(False)
self.assertTrue(True) self.assertTrue(True)
'''
def testBlockAPI(self): def testBlockAPI(self):
logger.debug('-'*26 + '\n') logger.debug('-'*26 + '\n')
logger.info('Running BlockAPI test #1...') logger.info('Running BlockAPI test #1...')
@ -154,15 +154,6 @@ class OnionrTests(unittest.TestCase):
self.assertTrue(False) self.assertTrue(False)
self.assertTrue(True) self.assertTrue(True)
def testBitcoinNode(self):
# temporarily disabled- this takes a lot of time the CI doesn't have
self.assertTrue(True)
#logger.debug('-'*26 + '\n')
#logger.info('Running bitcoin node test...')
#sbitcoin = btc.OnionrBTC()
def testPluginReload(self): def testPluginReload(self):
logger.debug('-'*26 + '\n') logger.debug('-'*26 + '\n')
logger.info('Running simple plugin reload test...') logger.info('Running simple plugin reload test...')
@ -224,7 +215,7 @@ class OnionrTests(unittest.TestCase):
logger.debug('thread finished.', timestamp = False) logger.debug('thread finished.', timestamp = False)
self.assertTrue(True) self.assertTrue(True)
'''
def testQueue(self): def testQueue(self):
logger.debug('-'*26 + '\n') logger.debug('-'*26 + '\n')
logger.info('Running daemon queue test...') logger.info('Running daemon queue test...')
@ -261,7 +252,6 @@ class OnionrTests(unittest.TestCase):
def testAddAdder(self): def testAddAdder(self):
logger.debug('-'*26 + '\n') logger.debug('-'*26 + '\n')
logger.info('Running address add+remove test') logger.info('Running address add+remove test')
import core import core
myCore = core.Core() myCore = core.Core()
if not os.path.exists('data/address.db'): if not os.path.exists('data/address.db'):
@ -273,5 +263,11 @@ class OnionrTests(unittest.TestCase):
self.assertTrue(False) self.assertTrue(False)
else: else:
self.assertTrue(False) # <- annoying :( self.assertTrue(False) # <- annoying :(
def testCrypto(self):
logger.info('running cryptotests')
if os.system('python3 cryptotests.py') == 0:
self.assertTrue(True)
else:
self.assertTrue(False)
unittest.main() unittest.main()


@ -1,33 +1,43 @@
![Onionr logo](./docs/onionr-logo.png) ![Onionr logo](./docs/onionr-logo.png)
[![Build Status](https://travis-ci.org/beardog108/onionr.svg?branch=master)](https://travis-ci.org/beardog108/onionr)
[![Open Source Love](https://badges.frapsoft.com/os/v3/open-source.png?v=103)](https://github.com/ellerbrock/open-source-badges/) [![Open Source Love](https://badges.frapsoft.com/os/v3/open-source.png?v=103)](https://github.com/ellerbrock/open-source-badges/)
Anonymous P2P platform, using Tor & I2P. Anonymous P2P platform, using Tor & I2P.
Major work in progress. ***Experimental, not safe or easy to use yet***
***THIS SOFTWARE IS NOT USABLE OR SECURE YET.*** <hr>
**The main repo for this software is at https://gitlab.com/beardog/Onionr/**
**Roadmap/features:** # Summary
Check the [GitHub Project](https://github.com/beardog108/onionr/projects/1) to see progress towards the alpha release. Onionr is a decentralized, peer-to-peer data storage network, designed to be anonymous and resistant to (meta)data analysis and spam.
Onionr can be used for mail, as a social network, instant messenger, file sharing software, or for encrypted group discussion.
# Roadmap/features
Check the [Gitlab Project](https://gitlab.com/beardog/Onionr/milestones/1) to see progress towards the alpha release.
## Core internal features
* [X] Fully p2p/decentralized, no trackers or other single points of failure * [X] Fully p2p/decentralized, no trackers or other single points of failure
* [X] High level of anonymity * [X] End to end encryption of user data
* [ ] End to end encryption where applicable
* [X] Optional non-encrypted blocks, useful for blog posts or public file sharing * [X] Optional non-encrypted blocks, useful for blog posts or public file sharing
* [ ] Easy API system for integration to websites * [X] Easy API system for integration to websites
* [ ] Metadata analysis resistance (being improved)
# Development
This software is in heavy development. If for some reason you want to get involved, get in touch first. ## Other features
**Onionr API and functionality is subject to non-backwards compatible change during development** **Onionr API and functionality is subject to non-backwards compatible change during pre-alpha development**
# Donate ## Help out
Everyone is welcome to help out. Please get in touch first if you are making non-trivial changes. If you can't help with programming, you can write documentation or guides.
Bitcoin/Bitcoin Cash: 1onion55FXzm6h8KQw3zFw2igpHcV7LPq Bitcoin/Bitcoin Cash: 1onion55FXzm6h8KQw3zFw2igpHcV7LPq
@ -36,5 +46,3 @@ Bitcoin/Bitcoin Cash: 1onion55FXzm6h8KQw3zFw2igpHcV7LPq
The Tor Project, I2P developers, and anyone else do not own, create, or endorse this project, and are not otherwise involved. The Tor Project, I2P developers, and anyone else do not own, create, or endorse this project, and are not otherwise involved.
The badges (besides travis-ci build) are by Maik Ellerbrock is licensed under a Creative Commons Attribution 4.0 International License. The badges (besides travis-ci build) are by Maik Ellerbrock is licensed under a Creative Commons Attribution 4.0 International License.
The onion in the Onionr logo is adapted from [this](https://commons.wikimedia.org/wiki/File:Red_Onion_on_White.JPG) image by Colin on Wikimedia under a Creative Commons Attribution-Share Alike 3.0 Unported license. The Onionr logo is under the same license.


@ -1,12 +1,9 @@
-urllib3==1.19.1
-gevent==1.2.2
+urllib3==1.23
+requests==2.18.4
 PyNaCl==1.2.1
-pycoin==0.62
-Flask==1.0
+gevent==1.2.2
 sha3==0.2.1
-simple_crypt==4.1.7
-ecdsa==0.13
-requests==2.12.4
 defusedxml==0.5.0
-SocksiPy_branch==1.01
-sphinx_rtd_theme==0.3.0
+simple_crypt==4.1.7
+Flask==1.0.2
+PySocks==1.6.8