How to generate a Bitcoin address in Python 3.5/3.6? - python

I've already tried pybitcoin, but I get a ModuleError for "services". I've tried multiple scripts from Google, but they are all for Python 2.
I tried to use this to get started
import ecdsa
return ecdsa.SigningKey.generate(curve=ecdsa.SECP256k1).to_string()
but it returns bytes, not a string. I assume it's a version issue because its GitHub page only mentions 3.1 and 3.2.
I want the addresses to be completely random.
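For reference, here is a minimal sketch of going from a freshly generated random key to a legacy (P2PKH) address string. It is not production code: it assumes the ecdsa package is installed and that your platform's hashlib exposes RIPEMD-160 via hashlib.new('ripemd160'). The bytes from to_string() are expected; turning them into a printable address is the hashing and Base58Check step shown below.
import hashlib
import ecdsa

ALPHABET = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'

def base58check(payload):
    # Append a 4-byte double-SHA256 checksum, then Base58-encode the result.
    checksum = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    full = payload + checksum
    num = int.from_bytes(full, 'big')
    encoded = ''
    while num > 0:
        num, rem = divmod(num, 58)
        encoded = ALPHABET[rem] + encoded
    # Each leading zero byte becomes a leading '1' in Base58Check.
    pad = len(full) - len(full.lstrip(b'\x00'))
    return '1' * pad + encoded

def new_address():
    signing_key = ecdsa.SigningKey.generate(curve=ecdsa.SECP256k1)  # random private key
    # Uncompressed public key: 0x04 prefix followed by the X and Y coordinates.
    pub_key = b'\x04' + signing_key.get_verifying_key().to_string()
    # HASH160 = RIPEMD-160(SHA-256(pubkey)); prepend the mainnet version byte 0x00.
    hash160 = hashlib.new('ripemd160', hashlib.sha256(pub_key).digest()).digest()
    return base58check(b'\x00' + hash160)

print(new_address())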

Related

Pytube get_by_resolution method

I've been testing the pytube library, and whenever I try to get a video with the method
get_by_resolution()
I always get an error related to the stream instance. Debugging at line 8 shows that it didn't create a StreamQuery instance; instead I got NoneType. So I made an alternative with other methods (get_lowest_resolution, get_highest_resolution) on the same link, and there was no problem downloading the file.
Source code:
https://github.com/Cybernetic-Ransomware/proving_ground/blob/master/TiffinTech/yt_downloader.py
requirements:
Python 3.11.0
pytube==12.1.2
I've checked all the resolution parameters from the method's docstring (e.g. "720p") and others (e.g. "720p60"). There is no problem with the link, because the "to-download-optional" function works well. Could it be a problem with the current library version?
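For context, a minimal sketch of the pattern in question, assuming pytube==12.1.2 and a placeholder video URL: get_by_resolution() only searches progressive streams and returns None when nothing matches, so guarding against None avoids the NoneType error.
from pytube import YouTube

yt = YouTube("https://www.youtube.com/watch?v=VIDEO_ID")  # placeholder URL
stream = yt.streams.get_by_resolution("720p")
if stream is None:
    # No progressive 720p stream for this video; fall back to the best progressive one.
    stream = yt.streams.get_highest_resolution()
stream.download()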

How do I get the values of memory addresses used by apps outside of Python (on macOS)?

I'm trying to use Python to read the value of a memory address that another program is using, and I haven't been able to figure out how. (I'm using macOS.)
I've tried many ways, but I think the closest I've gotten is this, because the website actually shows it working for them; when I do it in Python, it just crashes.
import ctypes
x = "0x7FA09AAE85A0"
a = ctypes.cast(x, ctypes.py_object).value
print(a)
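A small sketch of why that snippet crashes: ctypes.cast expects an integer (or a ctypes object) rather than a hex string, and it can only reinterpret addresses that are mapped inside the current process. The example below is purely illustrative and uses id(), which in CPython is an object's address within this process.
import ctypes

obj = "hello"
addr = id(obj)  # CPython detail: id() is the object's address in this process
recovered = ctypes.cast(addr, ctypes.py_object).value
print(recovered)  # prints "hello" only because addr belongs to this process
Reading memory owned by a different process on macOS goes through the Mach VM APIs (task_for_pid and friends) and requires elevated privileges; a plain ctypes cast of a foreign address is undefined behaviour and will typically just crash the interpreter.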

GPG decryption in AWS Lambda error code 2

I'm trying to create an AWS Lambda (in python, although my problems are probably not python-related) that will, among other things, decrypt a PGP file stored in S3.
I have a script that runs fine locally (on an ubuntu machine). I have adapted the relevant parts of that script into the lambda script. I'm using python-gnupg, and have created a layer to get to that functionality.
I created a CentOS VM on that ubuntu machine, and put gpg on that.
I have a deployment zip that I think is correct (contents are the script, bin/gpg, lib/{libgpg-error.so.0,libreadline.so.7,libcrypt.so.20,libassuan.so.0}; the gpg executable and libraries are all from that CentOS VM). If I, for instance, remove libassuan from that, I do get an error about that being a missing dependency, hence my believing that the zip is correctly created.
When I deploy the lambda, the code shows up correctly and seems to run (I did have to set it to use the python-gnupg layer, of course).
This is still in basic testing, so the file I'm trying to decrypt is the same one I used on the ubuntu box test, and is being retrieved from S3. The decryption key and passphrase are being retrieved from AWS Parameter Store and are, as near as I can tell, being retrieved correctly (the latter is definitely correct; the former is the correct length with the correct start and correct end). And I do not get an error adding the key (not sure if I would, I guess).
So, everything looks right, coming in. Getting to the decryption itself, we have:
gpg = gnupg.GPG(gnupghome=f"{targetDir}/..", gpgbinary="./bin/gpg")
key_data = open(keyFileName, 'rb').read()
priv_key = gpg.import_keys(key_data)
decrData = gpg.decrypt_file(contents, passphrase=pgpPassphrase, always_trust=True, extra_args=[ '--yes' ])
if not decrData.ok:
    logger.error(f"decryption failed: {decrData.status}")
As I implied earlier, this fails with error code 2, and the printed status message is 'decryption failed'. Totally unhelpful.
Unsurprisingly, the decrData object has zero data.
FWIW, always_trust and extra_args, as shown, did not change anything (nor did passing both as extra_args=[ '--yes', '--always-trust' ]). I was getting exactly the same results before adding those.
So, all that being said, the question is, does anyone have any suggestions on something I might have done wrong, or what else I can check to see why I'm getting this error?
Thanks.
Update:
Ok, I made a mistake here. I did not have this working, locally; I had a different version (using PGPy) working locally.
Testing locally yesterday, I figured out that my problem was that the key wasn't being imported successfully. The root of that seems to have been that the key was in the wrong format (binary, sent in as uuencoded in the Parameter Store). So I re-exported the key, adding the '--armor' argument to the 'gpg --export-secret-keys' command, then added 'passphrase=...' to the gpg.import_keys() call, and that worked locally (at least to get the key imported; I actually haven't checked the gpg.decrypt() or gpg.decrypt_file() command).
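A sketch of those two changes, reusing the variable names from the snippet above (key_data, pgpPassphrase, logger); the export command and the check on the import result are my additions, not part of the original code.
# Re-export the secret key in ASCII-armored form (run in a shell, not in Python):
#     gpg --export-secret-keys --armor KEY_ID > private-key.asc
# Then supply the passphrase when importing the protected key:
import_result = gpg.import_keys(key_data, passphrase=pgpPassphrase)
if not import_result.fingerprints:
    logger.error(f"key import failed: {import_result.results}")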
However, taking that exported key and putting it into the Parameter Store... It looks like I am getting it back from the parameter store correctly (I've checked the beginning, the ending, and spots in the middle - including where I joined the two parameters from the Parameter Store), but the key is still not being imported correctly when I run my lambda. FWIW, I did try adding "extra_args=[ '--yes', '--always-trust']" to the gpg.import_keys(), which did nothing. I also tried uploading the key file to S3, and getting it back from there, which also did nothing.
Again, I welcome any suggestions for other things to try.
Thanks again.
Update 2: I also tried supplying the key file (ascii-armored) as part of the distribution zip. That kind-of worked (though I did have to put the key file into a subdirectory; leaving it at the top level led to a permissions problem on the main python file, somehow), in that the file was there, and seemed to be read correctly. However, the key still failed to import.
I seem to be stuck where I was with the PGPy solution, where the code worked perfectly, when run locally, but not when I ran it in AWS.
Update 3: Finally did run my local test to completion and, as expected, the file decrypted perfectly. Wish I knew why I can't import the key on AWS.
Several things to add here.
Part of why I had gotten confused about testing locally was that I had gotten it running under the Python 3.7 runtime (which has GPG as part of the VM). So that's a partial answer.
I was trying to get it running under 3.9, though, to be current. I had tried to solve this via a Docker image. I created the image, but, for terrible reasons, was unable to deploy it. So that's why I was trying via a deployment zip.
As I mentioned, I was using a version of GPG (and its attendant libraries) pulled off a CentOS VM. This got closer than my previous attempt, which had used GPG and libraries from Ubuntu; that had fallen apart on the libc version.
What finally occurred to me today, though, was that maybe I could pull apart my Docker image, and grab GPG/libraries from that. Using 'docker save --output="filename.tar" image-name', I was able to dump the files out, so I sifted through that to get the relevant files and create a new deployment zip.
The short of it is that that worked perfectly.

Python 2.7 Windows scripts not working in Unix

I have a Python script that I want to run on Red Hat 6.7, but it constantly fails.
Python version: 2.7.13 (initially it had the default version, which I symlinked to /usr/local/bin/python2.7; I'm not sure whether it changed to 2.7, but when I type "which python" in the terminal it shows /usr/local/bin/python.)
Script to be run on: OS = Red Hat 6.7
Script written on: OS = Windows 10 (Python 2.7.11)
code:
import urllib
import json
url = 'https://username:pass@api.amsterdampc.com'  # sample URL (tested on 'api.openweathermap.org/data/2.5/weather?q=London' too; gives the same error)
data = json.load(urllib.urlopen(url))  # should return JSON data
print data
Here print data raises a "json decoder error". When I looked back through the steps, I found that urllib.urlopen(url) is not returning the required JSON data at all; instead it returns what looks like an XML/HTML response, or is sometimes empty.
Are there any specific changes I need to make if I run a Python script on a different OS? Isn't Python a platform-independent language?
By and large, Python is reasonably platform independent. But that doesn't mean there are no differences between platforms. If you look through the documentation for the standard library, you will find notes that some functions or classes are only available on certain platforms. And, for example, the way multiprocessing works differs between UNIX-like operating systems and MS Windows.
In this case you mention that the trouble begins with the fact that urllib.urlopen doesn't return what you expect. This is probably not an issue with the Python code. I suspect it is a networking/routing/firewall issue. You would have to show the returned non-JSON data to be sure.
As an aside, if you want to do HTTP in Python, do yourself a favour and use the requests module. It is a lot more user-friendly than urllib.
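A minimal sketch of the same request done with requests (my illustration, not part of the original answer; Python 2 syntax to match the question, and requests must be installed separately):
import requests

resp = requests.get('http://api.openweathermap.org/data/2.5/weather', params={'q': 'London'})
print resp.status_code
data = resp.json()  # raises ValueError if the body is not valid JSON
print data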
Edit 1:
It says:
Your request could not be processed. Request could not be handled
This could be caused by a misconfiguration, or possibly a malformed request.
So there are two possible causes:
misconfiguration
malformed request
The network object returned by urllib.urlopen() has some extra methods compared to files, like info() and getcode(). Using those might yield some extra information about why the request failed.
If you do a POST request, the information has to be formatted and encoded in a certain way. If you use requests.post, it will handle these details for you.
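For example, a quick way to inspect what actually came back before handing it to json.load() (Python 2, using the question's OpenWeatherMap sample URL):
import urllib

resp = urllib.urlopen('http://api.openweathermap.org/data/2.5/weather?q=London')
print resp.getcode()     # HTTP status code, e.g. 200 or a 4xx/5xx error
print resp.info()        # response headers, including Content-Type
print resp.read()[:500]  # the raw body, before trying json.load() on it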

What could cause a UnicodeEncodeError exception to creep into a working Python environment?

I have a method in my script that pulls a Twitter RSS feed, parses it with feedparser, wraps it in TwiML (Twilio-flavored XML) using the twilio module, and returns the resulting response from a CherryPy method via str(). This works fine in my development environment (Kubuntu 10.10); I have had mixed results on my server (Ubuntu Server 10.10 on Linode).
For the first few months, all was well. Then, the method described above began to fail with something like:
UnicodeEncodeError: 'ascii' codec can't encode character u'\u2019' in position 259: ordinal not in range(128)
But when I run the exact same code on the same feed, with the same Python version, on the same OS, on my development box, the code executes fine. However, I should note that even when it does work properly, some characters aren't output correctly. For example:
’
rather than
'
To solve this anomaly, I simply rebuilt my VPS from scratch, which worked for a few more months, and then the error came back.
The server automatically installs updated Ubuntu packages, but so does my development box. I can't think of anything that could cause this. Any help is appreciated.
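For what it's worth, a small standalone sketch of the failure in the traceback (Python 2; the tweet text is made up): calling str() on a unicode value that contains a non-ASCII character such as u'\u2019' implicitly encodes it with the ascii codec, which is exactly what raises this error, while encoding explicitly to UTF-8 does not.
text = u'It\u2019s a tweet'   # u'\u2019' is the right single quotation mark

try:
    body = str(text)          # implicit ascii encode raises UnicodeEncodeError
except UnicodeEncodeError as e:
    print e

body = text.encode('utf-8')   # explicit UTF-8 encode succeeds
print repr(body)
A common reason the same code behaves differently on two otherwise similar boxes is that locale and default-encoding settings differ between the environments, which would also fit the "rebuilding the VPS fixed it for a while" symptom.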
XML data cannot contain certain characters. An easy workaround is to wrap the data inside the XML tag that is giving you the error in a CDATA section. For example:
<xmltag><![CDATA[Your content]]></xmltag>
Or you can use numeric character references, e.g. &#38; for &.
More information on this is available here:
http://en.wikipedia.org/wiki/XML#Characters_and_escaping
http://en.wikipedia.org/wiki/Numeric_character_reference
http://en.wikipedia.org/wiki/List_of_XML_and_HTML_character_entity_references
http://en.wikipedia.org/wiki/CDATA
A few reboots later (for unrelated reasons) and it's working again. How odd....
