Change Python 2 library pybloomfilter to Python 3

I want to use the pybloomfilter package: https://github.com/axiak/pybloomfiltermmap.
I managed to install it under Python 3, but the saved file does not seem to keep the original information (under Python 2 this does not happen). I looked into the source code and nothing appears to be specific to Python 2, so I am at a loss as to how to make this library compatible with Python 3.
EDIT1:
by "not have the original information", I mean, when I add some string into the filter, then I end the program, the next time, I use open to load the filter, the filter is clear, it does not remember the added strings.

pybloomfiltermmap3 is a Python 3 fork of pybloomfiltermmap by Michael Axiak (@axiak).
class pybloomfilter.BloomFilter(capacity: int, error_rate: float[, filename=None: string][, perm=0755])
Install:
Make sure Cython is installed. Note that this version is for Python 3; if you are using Python 2, see
https://github.com/axiak/pybloomfiltermmap.
To build and install the module:
$ pip install cython
$ pip install pybloomfiltermmap3
There is also an instance method to sync the file:
BloomFilter.sync()
Forces a sync() call on the underlying mmap file object.
Use this if you are about to copy the file and you want to be Sure (TM)
you got everything correctly.
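Putting this together, a minimal sketch of persisting a filter across runs might look like this (the file name, capacity and error rate are just placeholders; open() is the BloomFilter.open class method the question refers to):
from pybloomfilter import BloomFilter

# First run: create a file-backed filter and add an entry.
bf = BloomFilter(10000000, 0.01, 'words.bloom')
bf.add('some-string')

# Flush the mmap-backed data to disk before the program exits.
bf.sync()

# A later run: reopen the existing filter file instead of creating a new one.
bf = BloomFilter.open('words.bloom')
print('some-string' in bf)   # expected: True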

Related

Programmatically check if wheel is compatible with Python installation

Is it possible to programmatically check if a wheel (whl) is compatible with the chosen Python installation before attempting to install?
I'm making an automated package installer (for the packages my Python project needs to work), and I want to attempt to install only compatible packages, so that any errors I see come from compatible modules I should investigate, not from incompatible ones I don't care about. Example: if I have wheels for Python 3.5 and 3.7, then on a 3.5 installation the 3.7 wheels should not even be attempted.
I've tried pkginfo (https://pypi.org/project/pkginfo/), but wheel.supported_platforms returns an empty array and I can't do anything with that (both a wheel with "any" and one with "win32" in the platform part of its name returned an empty array, so it seems I can't use it).
I also tried the output of python -m pip debug --verbose, but the following warning appears:
WARNING: This command is only meant for debugging. Do not use this with automation for parsing and getting these details, since the output and options of this command may change without notice.
This makes the command unusable for automation, even though below that it prints the "Compatible tags", which I could more or less use to determine from its name whether a wheel is supported. Example of those "Compatible tags" as a Python list:
['cp39-cp39-win_amd64', 'cp39-abi3-win_amd64', 'cp39-none-win_amd64', 'cp38-abi3-win_amd64', 'cp37-abi3-win_amd64', 'cp36-abi3-win_amd64', 'cp35-abi3-win_amd64', 'cp34-abi3-win_amd64', 'cp33-abi3-win_amd64', 'cp32-abi3-win_amd64', 'py39-none-win_amd64', 'py3-none-win_amd64', 'py38-none-win_amd64', 'py37-none-win_amd64', 'py36-none-win_amd64', 'py35-none-win_amd64', 'py34-none-win_amd64', 'py33-none-win_amd64', 'py32-none-win_amd64', 'py31-none-win_amd64', 'py30-none-win_amd64', 'cp39-none-any', 'py39-none-any', 'py3-none-any', 'py38-none-any', 'py37-none-any', 'py36-none-any', 'py35-none-any', 'py34-none-any', 'py33-none-any', 'py32-none-any', 'py31-none-any', 'py30-none-any']
With, for example, "pyHook-1.5.1-cp36-cp36m-win32.whl", I could check the name against those tags and see whether it's compatible (except that the warning above rules that approach out...).
Any other ideas?
Thanks in advance for any help!
EDIT: I could manually pull the parts out of the file name and hard-code some of the possibilities I see in the documentation, like "win32" and "win_amd64" (as I did before), but then I'd need to know exactly every value each part of the name can take (the documentation uses a telling expression, "e.g.", which means there are more possibilities than the ones mentioned), and that would be a lot of work. I was hoping someone had already made such a thing (maybe Python itself has a way in one of its internal packages).
You can do this using packaging.
pip install packaging
Example code to get the tags, similar to the ones you got from pip:
from packaging.tags import sys_tags
tags = sys_tags()
print([str(tag) for tag in tags])
# ['cp39-cp39-manylinux_2_33_x86_64', 'cp39-cp39-manylinux_2_32_x86_64', 'cp39-cp39-manylinux_2_31_x86_64', ..... , 'py31-none-any', 'py30-none-any']
Of course, you can do many more things programmatically with the tags variable above:
>>> tags = sys_tags()
>>> for tag in list(tags)[:3]:
...     print(tag.interpreter, tag.abi, tag.platform)
...
cp39 cp39 manylinux_2_33_x86_64
cp39 cp39 manylinux_2_32_x86_64
cp39 cp39 manylinux_2_31_x86_64
For more in-depth documentation, check: https://packaging.pypa.io/en/latest/tags.html#packaging.tags.sys_tags
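Going one step further, here is a sketch of checking a concrete wheel file name against the current interpreter, assuming a recent packaging release that also provides parse_wheel_filename (the helper name below is made up for illustration):
from packaging.tags import sys_tags
from packaging.utils import parse_wheel_filename

def is_wheel_compatible(filename):
    # Tags supported by the running interpreter.
    supported = set(sys_tags())
    # parse_wheel_filename splits e.g. 'pyHook-1.5.1-cp36-cp36m-win32.whl'
    # into (name, version, build, tags); tags handles compressed tag sets too.
    _, _, _, wheel_tags = parse_wheel_filename(filename)
    return any(tag in supported for tag in wheel_tags)

print(is_wheel_compatible('pyHook-1.5.1-cp36-cp36m-win32.whl'))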

Python: Incorrect Intellisense autocompletion for numpy

One thing I can't get over - when I use numpy in Visual Studio and I want to declare an array of zeroes, I write:
x = numpy.zeros(n)
and it is correct for the interpreter. BUT THE AUTOCOMPLETION GIVES ME:
X = numpy.zeros_like ...
How can I change it to get actually helpful autocompletion? In C++ I get everything all right, so I guess it's a problem specific to Python.
Edit: As I see it, the problem is that numpy.zeros is defined in numeric.py as:
zeros = multiarray.zeros. Apparently this is not enough for IntelliSense (or VisualAssist, for that matter), which requires a def statement to actually see the structure.
You need to install Python 3.5 and download the corresponding wheel for numpy, then install it with the command pip install xxxx (the numpy wheel version that you downloaded). For more detailed information about the installation steps, you can have a look at this.
Then open or create a Python application project in VS and set Python 3.5 as the default environment; IntelliSense for numpy.zeros then works fine in a .py file, as in the following screenshot (Python 3.5).
If you set Python 2.7 as the default environment, IntelliSense behaves just as you described, as below:

How to know which .whl module is suitable for my system with so many?

We have so many versions of a wheel.
How can we know which version should be installed on my system?
I remember there is a certain command that can check my system environment.
Or is there any other way?
---------------------Example Below this line -----------
scikit_learn-0.17.1-cp27-cp27m-win32.whl
scikit_learn-0.17.1-cp27-cp27m-win_amd64.whl
scikit_learn-0.17.1-cp34-cp34m-win32.whl
scikit_learn-0.17.1-cp34-cp34m-win_amd64.whl
scikit_learn-0.17.1-cp35-cp35m-win32.whl
scikit_learn-0.17.1-cp35-cp35m-win_amd64.whl
scikit_learn-0.18rc2-cp27-cp27m-win32.whl
scikit_learn-0.18rc2-cp27-cp27m-win_amd64.whl
scikit_learn-0.18rc2-cp34-cp34m-win32.whl
scikit_learn-0.18rc2-cp34-cp34m-win_amd64.whl
scikit_learn-0.18rc2-cp35-cp35m-win32.whl
scikit_learn-0.18rc2-cp35-cp35m-win_amd64.whl
In case this is still an issue, the following should tell you the information you need to know about your architecture to choose a wheel:
import platform
print(platform.architecture())
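The Python-version half of the choice (cp27/cp34/cp35 in the file names above) can be read off the interpreter in the same way; a small sketch combining both checks (the comments show example output, not guaranteed values):
import platform
import sys

print(platform.architecture())   # e.g. ('64bit', 'WindowsPE') -> pick a win_amd64 wheel
print(sys.version_info[:2])      # e.g. (3, 5) -> pick a cp35 wheel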
You don't have to know. Use pip - it will select the most specific wheel available.
As a warning, pip._internal isn't a stable API, so you wouldn't want to rely on it. But in case it's helpful (as it was to me) - this answer gives a way of solving the problem:
You can get it in python from pip following this solution:
Since pip version 19.3,
TargetPython.get_tags() returns
the supported PEP 425 tags to check wheel candidates against (source). The tags are returned in order of preference (most preferred first).
from pip._internal.models.target_python import TargetPython
target_python = TargetPython()
pep425tags = target_python.get_tags()
The class TargetPython encapsulates the properties of a Python interpreter one is targeting for a package install, download, etc.
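For illustration only (pip._internal may change without notice, and the naive file-name split below ignores compressed tag sets such as py2.py3), the returned tags can be stringified and compared against the tag portion of a wheel's file name:
from pip._internal.models.target_python import TargetPython

supported = {str(tag) for tag in TargetPython().get_tags()}

# Naive split: take the last three dash-separated parts of the file name
# ('scikit_learn-0.17.1-cp35-cp35m-win_amd64.whl' -> 'cp35-cp35m-win_amd64').
name = 'scikit_learn-0.17.1-cp35-cp35m-win_amd64.whl'
wheel_tag = '-'.join(name[:-len('.whl')].split('-')[-3:])
print(wheel_tag in supported)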
To avoid using pip._internal, you can use, in the shell (see here):
$ path/to/pythonX.Y -m pip debug --verbose

Bad Marshal error -- runsnake

I ran cProfile on a Python 3 script, which worked nicely, then tried to visualize the result with RunSnakeRun. However, I got an empty screen and the error 'bad marshal data'.
I removed the .pyc files, but that did not work either.
The commands I used to install RunSnakeRun were:
sudo apt-get install python-profiler python-wxgtk2.8 python-setuptools
sudo easy_install SquareMap RunSnakeRun
I am using Ubuntu.
Many thanks.
Note: I should add that I installed everything while py3k was activated.
TL;DR: This error occurs when profiling in Python 2.x and viewing the profile in Python 3.x or vice versa.
I had the same problem. As far as I can tell, the RunSnakeRun package has not been ported to Python 3. At least, I could pip-install it for Python 2 but not for Python 3 (SyntaxError). Further, I think the output format of cProfile is not compatible between Python 2 and 3. I did not take the time to find a definitive confirmation of this, but the documentation of pstats.Stats(*filenames, stream=sys.stdout) does say: "The file selected by the above constructor must have been created by the corresponding version of profile or cProfile. To be specific, there is no file compatibility guaranteed with future versions of this profiler, and there is no compatibility with files produced by other profilers." This seems to be the origin of your problem. For example, I made a profile output from Python 3:
import cProfile
cProfile.run('some code to profile', 'restats')
and tried to open it in RunSnakeRun and got the same marshal error you got. Further, if I do
import pstats
p = pstats.Stats('restats')
p.strip_dirs().sort_stats(-1).print_stats()
in Python 3, it works like a charm. If I do it in Python 2, it gives the marshal error. Now, RunSnakeRun is executed with Python 2 (unless you found some way to make it run under Python 3). So my guess is that you performed your profiling in Python 3 and are analyzing it with tools that rely on Python 2 and expect the output to be Python 2 compatible.
The RunSnakeRun project seems to have been inactive for a while now (the copyright on the home page says 2005-2011), and there is no indication that it will be ported to Python 3. If you want to develop in Python 3, considering an alternative visualization tool might be the best way to go. pyprof2calltree in combination with KCachegrind worked fine for me on Linux; it provides a visual view of the profiling output similar to what you would get from RunSnakeRun.
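For instance, reusing the 'restats' file produced above, a typical session looks roughly like this (assuming pyprof2calltree is installed and KCachegrind is available; check pyprof2calltree --help for the exact flags of your version):
$ pip install pyprof2calltree
$ pyprof2calltree -i restats -k   # convert the cProfile output and launch KCachegrind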
I also ran into the same problem, and I think there's no (good) way to use RunSnakeRun with Python 3 (as already mentioned in the previous answer). However, SnakeViz might help. It gives a relatively intuitive graphical overview of profiling data and, like RunSnakeRun, builds on top of profile outputs. Nice bonus: it also works in Jupyter notebooks!
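A minimal sketch of that workflow, again reusing the 'restats' profile from the previous answer (assuming SnakeViz is installed; the notebook magics come from its IPython extension):
$ pip install snakeviz
$ snakeviz restats
In a Jupyter notebook:
%load_ext snakeviz
%snakeviz some_function()   # some_function is a placeholder for whatever you want to profile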

How to bundle Python dependencies in IronWorker?

I'm writing a simple IronWorker in Python to do some work with the AWS API.
To do so I want to use the boto library, which is distributed via the PyPI repository. The boto library is not installed by default in the IronWorker runtime environment.
How can I bundle the boto library dependency with my IronWorker code?
Ideally I'm hoping I can use something like the gem dependency bundling available for Ruby IronWorkers - i.e. in myRuby.worker specify:
gemfile '../Gemfile', 'common', 'worker' # merges gems from common and worker groups
In the Python Loggly sample, I see that the hoover library is used:
#here we have to include hoover library with worker.
hoover_dir = os.path.dirname(hoover.__file__)
shutil.copytree(hoover_dir, worker_dir + '/loggly') #copy it to worker directory
However, I can't see where/how you specify which hoover library version you want, or where to download it from.
What is the official/correct way to use 3rd party libraries in Python IronWorkers?
Newer iron_worker versions have native support for the pip command.
So, you need:
runtime "python"
exec "something.py"
pip "boto"
pip "someotherpip"
full_remote_build true
[edit]We've worked on our toolset a bit since this answer was written and accepted. The answer from my colleague below is the recommended course moving forward.[/edit]
I wrote the Python client library for IronWorker. I'm also employed by Iron.io.
If you're using the Python client library, the easiest (and recommended) way to do this is to just copy over the library's installed folder, and include it when uploading the package. That's what the Python Loggly sample is doing above. As you said, that doesn't specify a version or where to download the library from, because it doesn't care. It just takes the one installed on your system and uses it. Whatever you get when you enter "import boto" on your local machine is what would be uploaded.
The other option is using our CLI to upload your worker, with a .worker file.
To do this, here's what you'd need to do:
Create a botoworker.worker file:
runtime "binary"
build 'pip install --install-option="--prefix=`pwd`/pips" boto'
file 'botoworker.py'
exec "botoworker.sh"
That second line is the pip command that will be run to install the dependency. You can modify it like you would any pip command run from the command line. It's going to execute that command on the worker during the "build" phase, so it's only executed once instead of every time you run a task.
The third line should be changed to the Python file you want to run--it's your Python worker file. Here's the one we used to test this:
import boto
If you save that as botoworker.py, the above should work without any modification. :)
The fourth line is a shell script that's going to actually run your worker. I've included the one we used below. Just save it as botoworker.sh, and you won't have to worry about modifying the .worker file above.
PYTHONPATH="$HOME/pips/lib/python2.7/site-packages:$PYTHONPATH" python botoworker.py "$#"
You'll notice it refers to your Python file--if you don't name your Python file botoworker.py, remember to change it here, too. All this does is set your PYTHONPATH to include the installed library, and then runs your Python file.
To upload this, just make sure you have the CLI installed (gem install iron_worker_ng, making sure your Ruby version is 1.9.3 or higher) and then run "iron_worker upload botoworker" in your shell, from the same directory your botoworker.worker file is in.
Hope this helps!
