I've been looking around here but I haven't found what I was searching for, so I hope this isn't already answered somewhere. If it is, I'll delete my question.
I was wondering if Sublime Text can suggest functions from a module when you write "module.function". For example, if I write "import PyQt4", then Sublime Text would suggest "PyQt4.QtCore" when I type "PyQt4.Q".
For now, I've installed "SublimeCodeIntel" and it does just that, but only for some modules (like math or urllib). Is it possible to configure it for any module? Or can you recommend any other plugin?
Thanks for reading!
PS: also, would it be possible to configure it for my own modules? I mean, for example, modules that I have written myself and that are in the same folder as the current file I'm editing.
SublimeCodeIntel will work for any module, as long as it's indexed. After you first install the plugin, indexing can take a while, depending on the number and size of third-party modules you have in site-packages. If you're on Linux and have multiple site-packages locations, make sure you define them all in the settings. I'd also recommend changing "codeintel_max_recursive_dir_depth" to 25, especially if you're on OS X, as the default value of 10 may not reach all the way into deep directory trees.
Make sure you read through all the settings, and modify them to suit your needs. The README also contains some valuable information for troubleshooting, so if the indexing still isn't working after a while, and after restarting Sublime a few times, you may want to delete the database and start off fresh.
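As a sketch, the settings in question look something like this (setting names as in the SublimeCodeIntel README; the paths are examples you would replace with your own site-packages locations):

```json
{
    // Extra directories to index, e.g. additional site-packages locations
    "codeintel_scan_extra_dir":
    [
        "/usr/lib/python2.7/site-packages",
        "/usr/local/lib/python2.7/site-packages"
    ],
    // Default is 10; deep trees (common on OS X) may need more
    "codeintel_max_recursive_dir_depth": 25
}
```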
I've seen a few questions asking this, but none of the solutions worked for me.
I am developing a few functions/classes in different modules and have a main.py script that calls everything.
The problem is, when I make a change to a function in another module (e.g. module1.py), VSCode does not detect the change when I call the function in main.py; it still runs the older version.
I can get around this by doing something like:
from importlib import reload
reload(module1)
but this gets old real quick especially when I'm importing specific functions or classes from a module.
Simply re-running the imports at the top of my main.py doesn't actually do anything; I can only pick up the changes if I kill the shell and reopen it from the beginning, which is not ideal if I am incrementally developing something.
I've read on a few questions that I could include this:
"files.useExperimentalFileWatcher" : true
into my settings.json, but it does not seem to be a known configuration setting in my version, 1.45.1.
This is something Spyder handles by default, and makes it very easy to code incrementally when calling functions and classes from multiple modules in the pkg you are developing.
How can I achieve this in VSCode? To be clear, I don't want to use IPython autoreload magic command.
Much appreciated
FYI, here are some of the other questions I saw but didn't get a working solution from, among others with similar questions/answers:
link1
link2
There is no support for this in VS Code, as Python's reload mechanism is not reliable enough to use outside of the REPL, and even then you should be careful. It isn't a perfect solution and can lead to stale code lying around, which can easily trip you up (and I know this because I wrote importlib.reload()!).
I'm working on a project that imports several packages, and when the script runs, I load a neural net model.
I want to know if the following is achievable:
If I run the script in another Python environment, I need to install all the packages I'm importing. Is it possible to avoid this? It would remove the need to install all the packages the first time.
Is it possible to embed the neural net .pb file into the code? Keep in mind that it weighs 80 MB, so a hex dump doesn't work (a text file with the dump weighs 700 MB).
The idea is to have one .py file with everything necessary within it. Is it possible?
Thank you!
If I run the script in another Python environment, I need to install all the packages I'm importing. Is it possible to avoid this?
Well, not really, but kinda (TL;DR: no, but it depends on exactly what you mean). It really just boils down to a limitation of the environment. Somewhere, someplace, you need the packages where you can grab them from disk; it's as simple as that. They have to be available and locatable.
By available, I mean accessible by means of the filesystem. By locatable, I mean there has to be somewhere you are looking. A system install would place packages somewhere accessible that can be reliably used as a place to install, and look for, them. This is part of the responsibility of your virtual environment; the only difference is that your virtual environment is there to separate you from your system Python's packages.
The advantage of this is straightforward: I can create a virtual environment that uses the package slamjam==1.2.3, where 1.2.3 is a specific version of the package slamjam, and also run a program that uses slamjam==1.7.9, without causing a conflict in my global environment.
So here's why I give the "kinda" vibe: if your user already has a package on their system, then they need to install nothing. They don't need a virtual environment for that package if it's already globally installed on their system. Likewise, they don't need a new one if it's in another virtual environment, although it is a great idea to separate your project's dependencies with one.
Is it possible to embed the neural net .pb file into the code? Keep in mind that it weighs 80 MB, so a hex dump doesn't work (a text file with the dump weighs 700 MB).
So, yeah, actually it's extremely doable. The thing is, it depends on what you mean.
As you're aware, a hex dump of the file takes a lot of space. That's very true. But it seems you're talking about raw hex, which takes at minimum 2 bytes of text for every byte of data. On top of that, you might be dumping extra information alongside it if you used a tool like hexdump, yada yada yada.
Moral of the story, you're going to waste a lot of space doing that. So I'll give you a couple options, of which you can choose one, or more.
Compress your data even more, if possible.
I haven't worked with TensorFlow data, but after a quick read, it appears to use compression with protobufs, so it's probably pretty well compressed already. Well, whatever; go ahead and see if you can squeeze any more juice out of the fruit.
Take binary data, and dump it into a different encoding (hint, hint: base64!)
Watch what happens when we convert something to hex...
>>> binary_data=b'this is a readable string, but really it just boils down to binary information. i can be expressed in a more efficient way than a binary string or hex, however'
>>> hex_data = binary_data.hex()
>>> print(hex_data)
746869732069732061207265616461626c6520737472696e672c20627574207265616c6c79206974206a75737420626f696c7320646f776e20746f2062696e61727920696e666f726d6174696f6e2e20692063616e2062652065787072657373656420696e2061206d6f726520656666696369656e7420776179207468616e20612062696e61727920737472696e67206f72206865782c20686f7765766572
>>> print(len(hex_data))
318
318 characters? We can do better.
>>> import base64
>>> b64_data = base64.b64encode(binary_data)
>>> print(b64_data)
b'dGhpcyBpcyBhIHJlYWRhYmxlIHN0cmluZywgYnV0IHJlYWxseSBpdCBqdXN0IGJvaWxzIGRvd24gdG8gYmluYXJ5IGluZm9ybWF0aW9uLiBpIGNhbiBiZSBleHByZXNzZWQgaW4gYSBtb3JlIGVmZmljaWVudCB3YXkgdGhhbiBhIGJpbmFyeSBzdHJpbmcgb3IgaGV4LCBob3dldmVy'
>>> print(len(b64_data))
212
You've now made your data smaller than the hex version by 33%!
Package a non-Python file with your .whl distribution. Yeah, totally doable. Have I done it before? Nope, never needed to yet. Will I ever? Yep. Do I have great advice on how to do it? No. But I have a link for you, it's totally doable.
You can download the file from within the application and only provide the URL. Something quick and easy, like
import wget
downloaded_filename = wget.download('some.site.com/a_file')  # saves to disk, returns the filename
Yeah, sure, there are other libraries like requests that do similar things, but for the example I chose wget because it has a simple interface too, and is always an option.
The idea is to have one .py file with everything necessary within it. Is it possible?
Well, kind of, yeah. For what you're asking (a single .py file, with nothing else, that will install your packages)? If you really want to copy and paste library after library, plus all the data, into one massive file nobody will download, I'm sure there's a way.
Let's look at a more supported approach to what you're asking: a .whl file is one file, and it can carry an internal list of the packages needed to install it; installing the .whl will then handle everything for you (installing, unpacking, etc.). I'd look in that direction.
Anyway, that's a lot of information, I know, but there's some logic as to why you can or can't do something. Hope it helped, and best of luck to you.
I would like to use the services that the Blockcypher module provides in my programme; however, I have (at least I think) downloaded the correct module package but can't get it to integrate with the Python on my computer. I am fairly new to Python, so I have no idea where to even start tackling this problem.
Modules, regardless of where you've got hold of them, are searched for in the directories listed in sys.path, which normally includes the directory of the script you are running (or the current directory in an interactive session).
When you download some code directly, a good first guess is to place it in the directory of the script from which you want to use the download. If it's just a .py file, place it there. If it's an archive containing a directory, then place the directory there (not the individual files).
Generally, you should prefer installing modules via a package manager such as pip or conda. Such package managers take care of placing modules properly for usage with your Python installation from wherever you will write your script. They also provide support for updating these modules to newer versions later.
Update: If you cannot make anything of these remarks, you should first read the section on modules in the Python tutorial, or even work through the full tutorial or a good book (or any other ;) to get a smooth entry into the friendly world of Python programming.
Update (2023): The Dive Into Python link above is outdated, so here is the updated link to this great resource:
https://diveintopython3.problemsolving.io
I think it's still the best beginner's resource, but, well, here are many more:
https://wiki.python.org/moin/IntroductoryBooks
I'm working on an Inno Setup installer for a Python application for Windows 7, and I have these requirements:
The app shouldn't write anything to the installation directory
It should be able to use .pyc files
The app shouldn't require a specific Python version, so I can't just add a set of .pyc files to the installer
Is there a recommended way of handling this? Like give the user a way to (re)generate the .pyc files? Or is the shorter startup time benefit from the .pyc files usually not worth worrying about?
PYC files aren't guaranteed to be compatible across different Python versions. If you don't know that all your customers are running the same Python version, you really don't want to distribute .pyc files directly. So you have to choose between distributing .pyc files and supporting multiple Python versions.
You could create a build process that compiles all your files using py_compile and zips them up into a version-specific package. You can do this with setuptools; however, it will be awkward, because you'll have to run py_compile under every version you need to support.
If you are basically distributing a closed application and don't want people to have trivial access to your source code, then py2exe is probably a simpler alternative. If your Python code is supposed to be integrated into the user's Python install, then it's probably simpler to just create a zip of your .py files and add a one-line .py stub that imports the zipped package(s) via a zip entry on sys.path (zipimport).
If it makes you feel better, .pyc files don't provide much extra security, and they don't really boost performance much either. :)
If you haven't read PEP 3147, that will probably answer your questions.
I don't mean the solution described in that PEP and implemented as of Python 3.2. That's great if your "multiple Python versions" just means "3.2, 3.3, and probably future 3.x". Or even if it means "2.6+ and 3.1+, but I only really care about 3.2 and 3.3, so if I don't get the pyc speedups for other ones that's OK".
But when I asked about your supported versions, you said "2.7", which means you can't rely on PEP 3147 to solve your problems.
Fortunately, the PEP is full of discussion of earlier attempts to solve the problem, and the pitfalls of each, and there should be more than enough there to figure out what the options are and how to implement them.
The one problem is that the PEP is very Linux-centric, mainly because it's primarily Linux distros that tried to solve the problem in the past. (Apple also did so, but their solution was (a) pretty much working, and (b) tightly coupled with the whole Mac-specific "framework" thing, so they were mostly ignored...)
So, it largely leaves open the question of "Where should I put the .pyc files on Windows?"
The best choice is probably an app-specific directory under the user's local application data directory. See Known Folders if you can require Vista or later, CSIDL if you can't. Either way, you're looking for the FOLDERID_LocalAppData or CSIDL_LOCAL_APPDATA, which is:
The file system directory that serves as a data repository for local (nonroaming) applications. A typical path is C:\Documents and Settings\username\Local Settings\Application Data.
The point is that it's a place for applications to store data that's separate for each user (and inside that user's profile directory), and also for each machine the user's roaming profile might end up on, which means you can safely put stuff there and know that the user has the permissions to write there without UAC getting involved, and also know (as well as you ever can) that no other user or machine will interfere with what's there.
Within that directory, you create a directory for your program, and put whatever you want there, and as long as you picked a unique name (e.g., My Unique App Name or My Company Name\My App Name or a UUID), you're safe from accidental collision with other programs. (There used to be specific guidelines on this in MSDN, but I can no longer find them.)
So, how do you get to that directory?
The easiest way is to just use the env variable %LOCALAPPDATA%. If you need to deal with older Windows, you can use %USERPROFILE% and tack \Local Settings\Application Data onto the end, which is guaranteed to either be the same, or end up in the same place via junctions.
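A sketch of resolving that directory in Python (the app name is a placeholder, and a non-Windows fallback is included just so the snippet runs anywhere):

```python
# Resolve a per-user, non-roaming data directory for the app's .pyc files.
import os
import pathlib

def app_data_dir(app_name):
    base = os.environ.get("LOCALAPPDATA")  # set on Vista and later
    if base is None:
        # On older Windows you'd build %USERPROFILE%\Local Settings\
        # Application Data instead; home() keeps this sketch portable.
        base = pathlib.Path.home()
    return pathlib.Path(base) / app_name

print(app_data_dir("My Unique App Name"))
```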
You can also use pywin32 or ctypes to access the native Windows APIs (since there are at least 3 different APIs for this and at least two ways to access those APIs, I don't want to give all possible ways to write this... but a quick Google or SO search for "pywin32 SHGetFolderPath" or "ctypes SHGetKnownFolderPath" or whatever should give you what you need).
Or, there are multiple third-party modules to handle this. The first one both Google and PyPI turned up was winshell.
Re-reading the original question, there's a much simpler answer that probably fits your requirements.
I don't know much about Inno, but most installers give you a way to run an arbitrary command as a post-copy step.
So, you can just use python -m compileall to create the .pyc files for you at install time, while you've still got elevated privileges, so there's no problem with UAC.
In fact, if you look at pywin32, and various other Python packages that come as installer packages, they do exactly this. This is an idiomatic thing to do for installing libraries into the user's Python installation, so I don't see why it wouldn't be considered reasonable for installing an executable that uses the user's Python installation.
Of course, if the user later decides to uninstall Python 2.6 and install 2.7, your .pyc files will be hosed... but from your description, it sounds like your entire program will be hosed anyway, and the recommended solution for the user would probably be to uninstall and reinstall anyway, right?
This is something that I think would be very useful. Basically, I'd like there to be a way to edit Python source programmatically without requiring human intervention. There are a couple of things I would like to do with this:
Edit the configuration of Python apps that use source modules for configuration.
Set up a "template" so that I can customize a Python source file on the fly. This way, I can set up a "project" system on an open source app I'm working on and allow certain files to be customized.
I could probably write something that can do this myself, but I can see that opening up a lot of "devil's in the details" type issues. Are there any ways to do this currently, or am I just going to have to bite the bullet and implement it myself?
Python's standard library provides pretty good facilities for working with Python source; note the tokenize module and the ast module (the older parser module served a similar role).
Most of these kinds of things can be determined programmatically in Python, using modules like sys and os, and the special __file__ attribute, which tells you where the current file lives in the filesystem.
It's important to keep in mind that when a module is first imported it will execute everything in the file-scope, which is important for developing system-dependent behaviors. For example, the os module basically determines what operating system you're using on import and then adjusts its implementation accordingly (by importing another module corresponding to Linux, OSX, Windows, etc.).
There's a lot of power in this feature and something along these lines is probably what you're looking for. :)
[Edit] I've also used socket.gethostname() in some rare, hackish instances. ;)
I had the same issue, and I simply opened the file, did some search-and-replace, and then reloaded the file in the Python interpreter. This works fine and is easy to do.
Otherwise, AFAIK, you have to use some kind of configuration objects.