Closed. This question is opinion-based. It is not currently accepting answers.
Closed 2 years ago.
I've seen a number of PostgreSQL modules for Python, like PyGreSQL, pyPgSQL, and psycopg. Most of them are Python DB API 2.0 compliant, and some are no longer actively developed.
Which module do you recommend? Why?
psycopg2 seems to be the most popular. I've never had any trouble with it. There's actually a pure Python interface for PostgreSQL too, called bpgsql. I wouldn't recommend it over psycopg2, but it's recently become capable enough to support Django and is useful if you can't compile C modules.
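For reference, basic DB API 2.0 usage with psycopg2 looks something like this (a minimal sketch; the connection parameters and the "users" table are placeholders for your own database):

    # Minimal DB API 2.0 usage with psycopg2. The connection parameters and
    # the "users" table are placeholders -- adjust them for your own database.
    import psycopg2

    conn = psycopg2.connect(dbname="mydb", user="me", password="secret", host="localhost")
    try:
        cur = conn.cursor()
        cur.execute("SELECT id, name FROM users WHERE id = %s", (42,))
        print(cur.fetchone())
        conn.commit()
    finally:
        conn.close()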
I suggest Psycopg over Psycopg2, since the former seems a bit more stable, at least in my experience. I have an application running 24/7, and I would sometimes get random memory crashes (double free or corruption errors) from Psycopg2. That was nothing I could debug quickly or easily, since it's not a Python error but a C error. I switched over to Psycopg and have not had any crashes since.
Also, as mentioned in another answer, bpgsql seems like a very good alternative. It's stable and easy to use, since you don't need to compile it. The only downside is that the library is not thread-safe.
PyGreSQL seems nice; it offers a more direct way to query the database. I don't know how stable it is, though.
I've used pg8000 without any problems for the past 3 years. It's up to date, available on PyPI, and works on both Python 2 and Python 3. You can use "pip install pg8000" to get it quickly (don't forget to use --proxy=yourproxy:yourport if you're behind a firewall).
If you're worried about thread safety, it also reports its thread-safety level (see http://pybrary.net/pg8000/dbapi.html and https://www.python.org/dev/peps/pep-0249/ for definitions of the different levels of thread safety), although I have not yet used threads with PostgreSQL.
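As a quick check, every DB API 2.0 module is required by PEP 249 to expose its thread-safety level (along with apilevel and paramstyle) as module attributes. A small sketch; note that where pg8000 keeps these attributes has moved between releases, so this tries both locations:

    import psycopg2
    print("psycopg2:", psycopg2.apilevel, psycopg2.threadsafety, psycopg2.paramstyle)

    try:
        from pg8000 import dbapi as pg8000_dbapi  # newer pg8000 releases
    except ImportError:
        import pg8000 as pg8000_dbapi             # older releases expose these directly
    print("pg8000:", pg8000_dbapi.apilevel, pg8000_dbapi.threadsafety, pg8000_dbapi.paramstyle)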
In my experience, psycopg2 is the most used library for this. Like you said, it's DB API 2.0 compliant, which gives you a solid interface to work with.
For those who find the standard API to be a little too verbose and hard to work with, I made a small library that might help:
https://github.com/hugollm/rebel
Psycopg1 is known for better performance in heavily threaded environments (like web applications) than Psycopg2, although it is no longer maintained. Both are well written and rock solid; I'd choose one of these two depending on the use case.
I use only psycopg2 and have had no problems with it.
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 3 years ago.
I am creating a Docker image which is going to be based on the buildpack-deps:stretch image. I am told that it is preferred to install a Python version built from source instead of installing pre-built binaries.
Question: Why is it preferred for Python to be built from source?
I read some articles that talk about being able to get the latest patches, etc., but that does not make much sense to me, since the pre-built binaries would also include those patches after some testing is done. Ideally, you do not want to take in source code that has not been tested, build it, and use it. Am I missing something in my argument?
If there are pre-built Python binaries for debian:stretch, why, in my scenario, should I prefer to build from source instead of using the pre-built binaries?
Thanks in advance!
I am told that it is preferred to install a Python version which was built from source instead of installing pre-built binaries.
Without a qualification of why, that is not a very meaningful suggestion. There are reasons to go either way, so I'll list them instead.
Reasons to use system packages:
automatic system updates will take care of emergency updates and security patching in a standard way - you can save a lot of time
you can be fairly confident that everything's compiled and distributed in a consistent way without breaking applications which rely on a given version (for example 3.5)
not managing your own compilation saves you time in both development and in build/release process
you don't have to track upstream point-releases using a custom process
Reasons to compile yourself:
if you want to rely on a specific patch not included upstream
if you need to use a newer version that is not packaged in your distro
In general, unless you can specifically say what you're going to gain by compiling from source, understand that you're committing to doing that for every future release, and can define how you're going to automate it and who's going to do the work/maintenance, I don't see a reason to do it.
In addition to @viraptor's excellent advice for installation on systems, containerization adds some things which change the equation a bit.
For any container, building it yourself ensures more knowledge of what's going into it, which usually means a more secure container. It's also usually very easy to use the more minimal alpine containers (or to port an application to them) which means there's less surface area for security holes in the first place, and they're physically smaller which means less local maintenance and lower hosting costs.
Python can easily be built on Alpine; there are even official containers maintained by Docker whose source can be forked to build your own. Docker Hub will even build it for you from a GitHub repository if you configure it to.
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 5 years ago.
What is the reason packages are distributed separately?
Why do we have separate 'add-on' packages like pandas, numpy?
Since these modules seem so important, why are these not part of Python itself?
Are the "single distributions" of Python to come pre-loaded?
If it's part of the design to keep the "core" separate from additional functionality, then these packages should at least come "pre-imported" as soon as you start Python.
Where can I find such distributions if they exist?
Many of these tools, including core Python, are developed and distributed separately by different teams, so it is up to aggregators to curate them and put them into a single distribution. Here are some notable examples:
The Anaconda distribution from Continuum Analytics
The Canopy distribution from Enthought
ActivePython from ActiveState
Python(x,y) for scientific programming
WinPython for Windows
PyIMSL Studio from Rogue Wave Software
This is a bit like asking "Why doesn't every motor come with a car around it?"
While a car without a motor is pretty useless, the inverse doesn't hold: Most motors aren't even used for cars. Of course one could try selling a complete car to people who want to have a generator, but they wouldn't buy it.
Also the people designing cars might not be the best to build a motor and vice versa.
It's similar with Python. Most Python installations are not used with NumPy, SciPy, or pandas. Distributing Python with those packages would create massive overhead.
However, there is of course strong demand for pre-built distributions which combine those modules with a matching Python and make sure everything interacts smoothly. Some examples are Anaconda, Canopy, Python(x,y), WinPython, etc. So an end user who simply wants a car that runs is best off choosing one of those, instead of installing everything from scratch. Other users, who want to always have the newest version of everything, might choose to tinker it together themselves.
You can make the interactive interpreter launch with "pre-imported" modules, as well as with pre-run code, using the interactive startup file (the PYTHONSTARTUP environment variable).
Alternatively, you can use the customization modules (sitecustomize/usercustomize) to pre-run code on every invocation of Python.
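For example, a startup file might look like this (a minimal sketch; the file name is arbitrary, numpy and pandas must already be installed, and PYTHONSTARTUP only affects interactive sessions, not scripts):

    # ~/.pythonstartup -- point the PYTHONSTARTUP environment variable at this
    # file, e.g. export PYTHONSTARTUP=~/.pythonstartup in your shell profile.
    # It runs only for interactive interpreter sessions, not for scripts.
    import numpy as np
    import pandas as pd
    print("pre-imported: numpy as np, pandas as pd")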
Regarding whether pandas and numpy should be part of the standard library - it's a matter of opinion.
PyPI currently has over 100,000 libraries available. I'm sure someone thinks each of these is important.
Why do you need or want to pre-load libraries, considering how easy a pip install is especially in a virtual environment?
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
I'm working on a large-ish project. I currently have some functional tests hacked together in shell scripts. They work, but I'd like to make them a little bit more complicated. I think it will be painful to do in bash, but easy in a full-blown scripting language. The target language of the project is not appropriate for implementing the tests.
I'm the only person who will be working on my branch for the foreseeable future, so it isn't a huge deal if my tests don't work for others. But I'd like to avoid committing code that's going to be useless to people working on this project in the future.
In terms of the test harness "just working" for as many current and future contributors to this project as possible, am I better off porting my shell scripts to Python or Perl?
I suspect Perl, since I know that the default version of Python installed (if any) varies widely across OSes, but I'm not sure if that's just because I'm less familiar with Perl.
Every modern Linux or Unix-like system comes with both installed; which you choose is a matter of taste. Just pick one.
I would say that whatever you choose, you should try to write easily-readable code in it, even if you're the only one who will be reading that code. Now, Pythonists will tell you that writing readable code is easier in Python, which may be true, but it's certainly doable in Perl as well.
This is more of a personal preference, but I have always used Python and will keep using it until something shows me it's better to use something else. It's simple, very extensible, robust and fast, and supported by a large user base.
The version of Python doesn't matter much, since you can update it, and most OSes ship Python 2.5, which expands compatibility a lot. Perl is also included in Linux and Mac OS, though.
I think Python will be good, but if you like Perl and have always worked with it, just use Perl and don't complicate your life.
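To give a concrete idea of what porting a shell test might look like, here is a minimal sketch using only the Python standard library; "./myapp" and its expected output are hypothetical placeholders for the real program under test:

    import subprocess
    import unittest

    class FunctionalTest(unittest.TestCase):
        def test_version_flag(self):
            # Popen/communicate works on both Python 2 and Python 3, which helps
            # if the interpreter version varies across contributors' machines.
            proc = subprocess.Popen(["./myapp", "--version"],
                                    stdout=subprocess.PIPE, stderr=subprocess.PIPE)
            out, err = proc.communicate()
            self.assertEqual(proc.returncode, 0)
            self.assertIn(b"myapp", out)

    if __name__ == "__main__":
        unittest.main()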
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking us to recommend or find a tool, library or favorite off-site resource are off-topic for Stack Overflow as they tend to attract opinionated answers and spam. Instead, describe the problem and what has been done so far to solve it.
Closed 8 years ago.
What's the ideal Python version for a beginner to start learning Python? I need to recommend some newbies a programming language to learn and I chose Python. I'm still not sure which version.
It depends what you're going to do with it.
Unicode handling has vastly improved in Python 3. So if you intend to use this for building web pages or some such, Python 3 might be the obvious choice.
On the other hand, many libraries and frameworks still only support Python 2. For example, the numerical processing library numpy, and the web framework Django both only work on Python 2. So if you intend to use any of those, stick with Python 2.
Either way, the differences aren't huge to begin with. I'd say Python 3 is a little easier to pick up (due to its string handling), but there is a good reason to learn Python 2 first: that way, if you run into a piece of Python 2 code (and you will), you'll know what is going on.
Adoption of Python 3 has been held up by a few critical third-party packages. numpy is a good example of a package that has only just started working on Python 3. Quite a few other packages depend on numpy, so hopefully they will be supporting Python 3 very shortly too.
Most of the time it's possible to write code that is compatible with 2.6/2.7/3.1 by using __future__ imports, so learning one does not mean you aren't learning the other.
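For example, a module written like this behaves the same on 2.6/2.7 and 3.x (a small sketch; the __future__ imports are no-ops on Python 3):

    from __future__ import print_function, division, unicode_literals

    print("1 / 2 =", 1 / 2)   # 0.5 on Python 2.7 as well, thanks to "division"
    s = "text"                # a unicode string on both, thanks to "unicode_literals"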
My vote is for 3.1
My reasoning is simple and selfish. The more new Python programmers there are who only use 3.1, the more likely it is that one of them will decide they need some library from 2.6 and port it to 3.1 (learning 2.6 in the process, I might add).
After this happens, I can start using 3.1: it looks really cool.
I would suggest Python 2.6; I know it's old, but not only is it the current standard, there is also far more documentation and there are more libraries available for it.
I'll throw my experience into the works:
Right now you should be using 2.6. Switch to 2.7 when 2.7.1 comes out. Switch to 3.1/3.2 when all the libraries you want are fully supported and stable there.
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 5 years ago.
I write tons of Python scripts, and I find myself reusing lots of code that I've written for other projects. My solution has been to make sure the code is separated into logical modules/packages (this one's a given). I then make them setuptools-aware and publish them on PyPI. This allows my other scripts to always have the most up-to-date code, I get a warm fuzzy feeling because I'm not repeating myself, and my development in general is made less complicated. I also feel good that there MAY be someone out there who finds my code handy for something they're working on, but it's mainly for selfish reasons :)
To all the Pythonistas: how do you handle this? Do you use PyPI or setuptools (easy_install)? Or something else?
I have been doing the same thing. Extract common functionality, pretty the code up with extra documentation and unit tests/doctests, create a setup.py for easy_install, and then release on PyPI. Recently, I created a single Google Code site where I manage the source and keep the wiki up to date.
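For anyone who hasn't done this before, the setup.py for such a package can be quite small. A minimal sketch, with the name, version, and description as placeholders (these answers are from the easy_install era; today you would typically upload with twine instead, but the setup.py itself looks much the same):

    # setup.py -- a minimal sketch; name, version and description are placeholders.
    from setuptools import setup, find_packages

    setup(
        name="my-utils",
        version="0.1.0",
        description="Reusable helpers shared between my scripts",
        packages=find_packages(),
        install_requires=[],
    )

Running "python setup.py sdist" then builds a source distribution that can be uploaded to PyPI.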
What kind of modules are we talking about here? If you're planning on distributing your projects to other python developers, setuptools is great. But it's usually not a very good way to distribute apps to end users. Your best bet in the latter case is to tailor your packaging to the platforms you're distributing it for. Sure, it's a pain, but it makes life for end users far easier.
For example, on my Debian system, I usually don't use easy_install, because it is a little bit more difficult to get eggs to work well with the package manager. On OS X and Windows, you'd probably want to package everything up using py2app and py2exe respectively. This makes life for the end user better; after all, they shouldn't know or care what language your scripts are written in. They just need them to install.
I store it all offline in a logical directory structure, with commonly used modules grouped as utilities. This means it's easier to control which versions I publish, and manage. I also automate the build process to interpret the logical directory structure.