Python cannot see installed module `news`

Python reports that the news module is not installed:
$ python -c "import news"
Traceback (most recent call last):
File "<string>", line 1, in <module>
ImportError: No module named news
Even though:
$ pip show news
Name: news
Version: 1.0
Summary: my first python module
Home-page: UNKNOWN
Author: sang
Author-email: 1975001828@qq.com
License: UNKNOWN
Location: /usr/lib/python2.7/site-packages
Requires:
$ python -V
Python 2.7.10
$ echo $PYTHONPATH
:/usr/lib/python2.7/site-packages
$ python -c "import sys; print sys.path"
['', '/usr/lib/python2.7/site-packages/pydns-2.3.6-py2.7.egg', '/usr/lib/python2.7/site-packages/xmltodict-0.10.2-py2.7.egg', '/usr/lib/python2.7/site-packages/spambayes-1.1b1-py2.7.egg', '/usr/lib/python2.7/site-packages/pydns-2.3.6-py2.7.egg', '/usr/lib/python2.7/site-packages/lockfile-0.11.0-py2.7.egg', '/usr/lib/python2.7/site-packages/FinancialFundamentals-0.2.3-py2.7.egg', '/usr/lib/python2.7/site-packages/vector_cache-0.1.0-py2.7.egg', '/usr/lib/python2.7/site-packages/python_dateutil-1.5-py2.7.egg', '/usr/lib/python2.7/site-packages/blist-1.3.6-py2.7-cygwin-2.5.2-i686.egg', '/usr/lib/python2.7/site-packages/xmltodict-0.10.2-py2.7.egg', '/usr/lib/python2.7/site-packages/buildozer-0.33.dev0-py2.7.egg', '/home/Administrator/python/scrapping/guru_steve_avon', '/usr/lib/python2.7/site-packages', '/usr/lib/python27.zip', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-cygwin', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/home/Administrator/.local/lib/python2.7/site-packages', '/usr/lib/python2.7/site-packages/PIL', '/usr/lib/python2.7/site-packages/gst-0.10', '/usr/lib/python2.7/site-packages/gtk-2.0']
Any suggestions what can be done to get news to be recognized?
Edit 1, in reply to @JacobIRR:
news was installed without complaint by pip (see below), but there does not seem to be a news subdirectory in site-packages.
$ ls -lsad /usr/lib/python2.7/site-packages/n*
0 drwxr-xr-x+ 1 Administrator None 0 Jun 4 2015 /usr/lib/python2.7/site-packages/ndg
1 -rw-r--r-- 1 Administrator None 297 Jun 4 2015 /usr/lib/python2.7/site-packages/ndg_httpsclient-0.4.0-py2.7-nspkg.pth
4 drwxr-xr-x+ 1 Administrator None 0 Jun 4 2015 /usr/lib/python2.7/site-packages/ndg_httpsclient-0.4.0-py2.7.egg-info
4 drwxr-xr-x+ 1 Administrator None 0 Feb 25 2016 /usr/lib/python2.7/site-packages/netsnmp
4 drwxr-xr-x+ 1 Administrator None 0 Feb 25 2016 /usr/lib/python2.7/site-packages/netsnmp_python-1.0a1-py2.7.egg-info
4 drwxr-xr-x+ 1 Administrator None 0 Apr 26 2016 /usr/lib/python2.7/site-packages/networkx
4 drwxr-xr-x+ 1 Administrator None 0 Apr 26 2016 /usr/lib/python2.7/site-packages/networkx-1.11.dist-info
4 drwxr-xr-x+ 1 Administrator None 0 Apr 8 21:28 /usr/lib/python2.7/site-packages/news-1.0.dist-info
1 -rw-r--r-- 1 Administrator None 154 Apr 8 21:28 /usr/lib/python2.7/site-packages/news_module.py
1 -rw-r--r-- 1 Administrator None 457 Apr 8 21:28 /usr/lib/python2.7/site-packages/news_module.pyc
12 drwxr-xr-x+ 1 Administrator None 0 Sep 27 2015 /usr/lib/python2.7/site-packages/nltk
4 drwxr-xr-x+ 1 Administrator None 0 Sep 27 2015 /usr/lib/python2.7/site-packages/nltk-3.0.5-py2.7.egg-info
0 drwxr-xr-x+ 1 Administrator None 0 Apr 13 2015 /usr/lib/python2.7/site-packages/numba
0 drwxr-xr-x+ 1 Administrator None 0 Apr 13 2015 /usr/lib/python2.7/site-packages/numba-0.18.2-py2.7.egg-info
0 drwxr-xr-x+ 1 Administrator None 0 Feb 25 2016 /usr/lib/python2.7/site-packages/numpy
0 drwxr-xr-x+ 1 Administrator None 0 Apr 26 2016 /usr/lib/python2.7/site-packages/numpy-1.11.0.dist-info
0 drwxr-xr-x+ 1 Administrator None 0 Apr 9 2015 /usr/lib/python2.7/site-packages/numpy-1.9.2-py2.7.egg-infoold-1
0 drwxr-xr-x+ 1 Administrator None 0 Jun 18 2015 /usr/lib/python2.7/site-packages/numpy-1.9.2-py2.7.egg-infoold-2
$ pip uninstall news
Uninstalling news-1.0:
/usr/lib/python2.7/site-packages/news-1.0.dist-info/DESCRIPTION.rst
/usr/lib/python2.7/site-packages/news-1.0.dist-info/INSTALLER
/usr/lib/python2.7/site-packages/news-1.0.dist-info/METADATA
/usr/lib/python2.7/site-packages/news-1.0.dist-info/RECORD
/usr/lib/python2.7/site-packages/news-1.0.dist-info/WHEEL
/usr/lib/python2.7/site-packages/news-1.0.dist-info/metadata.json
/usr/lib/python2.7/site-packages/news-1.0.dist-info/top_level.txt
/usr/lib/python2.7/site-packages/news_module.py
/usr/lib/python2.7/site-packages/news_module.pyc
Proceed (y/n)? y
Successfully uninstalled news-1.0
$ pip install news
Collecting news
Installing collected packages: news
Successfully installed news-1.0

Well, I downloaded the module from PyPI to see what is inside it. The news-1.0.zip contains only one file, news_module.py, which defines a single function, read_news.
So you should try:
$ python -c "import news_module"
Q: Are you looking for the newspaper module instead?
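More generally, the dist-info directory that pip creates records the actual importable names in top_level.txt, which can differ from the project name (as with news providing news_module). A small sketch of reading it (Python 3, using a throwaway directory standing in for site-packages):

```python
from pathlib import Path
import tempfile

def importable_names(dist_info_dir):
    """Return the top-level importable names recorded by a wheel install.

    pip writes them to the dist-info directory's top_level.txt; they may
    differ from the project name users pip-install.
    """
    top_level = Path(dist_info_dir) / "top_level.txt"
    if not top_level.exists():
        return []
    return [ln.strip() for ln in top_level.read_text().splitlines() if ln.strip()]

# Demo with a throwaway dist-info directory shaped like news-1.0.dist-info:
d = Path(tempfile.mkdtemp()) / "news-1.0.dist-info"
d.mkdir()
(d / "top_level.txt").write_text("news_module\n")
print(importable_names(d))  # ['news_module']
```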

Related

FileNotFoundError error while trying to rename the symlinks using os.rename

I would like to know whether it is possible to rename a symlink with Python.
I have already tried os.rename and shutil.move.
Any ideas?
os.rename gives me this error: OSError: [Errno 18] Cross-device link
>>> import sys, os
>>>
>>> path = '/Library/Application Support/appsolute/MAMP PRO/db/'
>>> job = path + 'mysql-job/'
>>> perso = path + 'mysql-perso/'
>>> mysql = path + 'mysql/'
>>>
>>> os.rename(mysql, job)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
OSError: [Errno 18] Cross-device link
>>> exit()
Danny-Dombrowski:script ddombrowski$ ls -al /Library/Application\ Support/appsolute/MAMP\ PRO/db/
total 24
drwxrwxr-x 5 root admin 170 7 fév 19:29 .
drwxrwxr-x 12 root admin 408 7 fév 17:14 ..
-rw-r--r--# 1 ddombrowski admin 6148 7 fév 19:29 .DS_Store
lrwxr-xr-x 1 ddombrowski admin 46 7 fév 19:29 mysql -> /Volumes/Gestion Portail Sante/Database/mysql/
drwxrwxr-x 11 ddombrowski admin 374 7 fév 19:22 mysql-perso
os.rename should work.
xupeng@xupeng t $ ls -l
total 0
-rw-r--r-- 1 xupeng xupeng 0 Feb 8 08:22 a
lrwxrwxrwx 1 xupeng xupeng 1 Feb 8 08:23 b -> a
xupeng@xupeng t $ python
Python 2.6.5 (release26-maint, Sep 21 2011, 10:32:38)
[GCC 4.3.4] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> os.rename('b', 'c')
xupeng@xupeng t $ ls -l
total 0
-rw-r--r-- 1 xupeng xupeng 0 Feb 8 08:22 a
lrwxrwxrwx 1 xupeng xupeng 1 Feb 8 08:23 c -> a
os.rename will work fine:
$ ln -s target link
$ python -c "import os; os.rename('link', 'link.new')"
$ ls -l link.new
lrwxrwxrwx 1 phihag phihag 6 Feb 8 01:25 link.new -> target
Make sure you don't include a / after the symlink: link/ is the same as link/., and not the same as link.
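The point above can be demonstrated: renaming the bare link path moves the link itself, not its target. A minimal sketch using a temporary directory:

```python
import os
import tempfile

# os.rename on the bare link path moves the symlink itself; a trailing
# slash would make the OS resolve the link to its target first.
d = tempfile.mkdtemp()
target = os.path.join(d, "target")
os.mkdir(target)
link = os.path.join(d, "link")
os.symlink(target, link)

os.rename(link, os.path.join(d, "link.new"))  # rename the link, no slash

assert os.path.islink(os.path.join(d, "link.new"))
assert os.readlink(os.path.join(d, "link.new")) == target
```

When the link and its destination path genuinely sit on different filesystems, os.rename cannot succeed; the usual workaround for Errno 18 is to recreate the link at the new location (os.symlink(os.readlink(old), new) followed by os.remove(old)).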

python compressed 4GB bz2 EOFError: end of stream was already found (nested subfolders)

I'm trying to read a specific file from a bz2-compressed tar archive using Python.
tar = tarfile.open(filename, "r|bz2", bufsize=57860311)
for tarinfo in tar:
    print tarinfo.name, "is", tarinfo.size, "bytes in size and is",
    if tarinfo.isreg():
        print "a regular file."
        # read the file
        f = tar.extractfile(tarinfo)
        #print f.read()
    elif tarinfo.isdir():
        print "a directory."
    else:
        print "something else."
tar.close()
But at the end I got the error:
/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/tarfile.pyc in read(self, size)
577 buf = "".join(t)
578 else:
--> 579 buf = self._read(size)
580 self.pos += len(buf)
581 return buf
/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/tarfile.pyc in _read(self, size)
594 break
595 try:
--> 596 buf = self.cmp.decompress(buf)
597 except IOError:
598 raise ReadError("invalid compressed data")
EOFError: end of stream was already found
I also tried to list the files within the tar through tar.list() and got the same error again:
-rwxr-xr-x lindauer/or3uunp 0 2013-05-21 00:58:36 r3.2/
-rw-r--r-- lindauer/or3uunp 6057 2012-01-05 14:41:00 r3.2/readme.txt
-rw-r--r-- lindauer/or3uunp 44732 2012-01-04 10:08:54 r3.2/psychometric.csv
-rw-r--r-- lindauer/or3uunp 57860309 2012-01-04 09:58:20 r3.2/logon.csv
/usr/local/Cellar/python@2/2.7.15_1/Frameworks/Python.framework/Versions/2.7/lib/python2.7/tarfile.pyc in _read(self, size)
594 break
595 try:
--> 596 buf = self.cmp.decompress(buf)
597 except IOError:
598 raise ReadError("invalid compressed data")
EOFError: end of stream was already found
I listed the files inside the archive using the tar command. Here is the result:
tar -tvf r3.2.tar.bz2
drwxr-xr-x 0 lindauer or3uunp 0 May 21 2013 r3.2/
-rw-r--r-- 0 lindauer or3uunp 6057 Jan 5 2012 r3.2/readme.txt
-rw-r--r-- 0 lindauer or3uunp 44732 Jan 4 2012 r3.2/psychometric.csv
-rw-r--r-- 0 lindauer or3uunp 57860309 Jan 4 2012 r3.2/logon.csv
-rw-r--r-- 0 lindauer or3uunp 12494829865 Jan 5 2012 r3.2/http.csv
-rw-r--r-- 0 lindauer or3uunp 1066622500 Jan 5 2012 r3.2/email.csv
-rw-r--r-- 0 lindauer or3uunp 218962503 Jan 5 2012 r3.2/file.csv
-rw-r--r-- 0 lindauer or3uunp 29156988 Jan 4 2012 r3.2/device.csv
drwxr-xr-x 0 lindauer or3uunp 0 May 20 2013 r3.2/LDAP/
-rw-r--r-- 0 lindauer or3uunp 140956 Jan 4 2012 r3.2/LDAP/2011-01.csv
-rw-r--r-- 0 lindauer or3uunp 147370 Jan 4 2012 r3.2/LDAP/2010-05.csv
-rw-r--r-- 0 lindauer or3uunp 149221 Jan 4 2012 r3.2/LDAP/2010-02.csv
-rw-r--r-- 0 lindauer or3uunp 141717 Jan 4 2012 r3.2/LDAP/2010-12.csv
-rw-r--r-- 0 lindauer or3uunp 148931 Jan 4 2012 r3.2/LDAP/2010-03.csv
-rw-r--r-- 0 lindauer or3uunp 147370 Jan 4 2012 r3.2/LDAP/2010-04.csv
-rw-r--r-- 0 lindauer or3uunp 149793 Jan 4 2012 r3.2/LDAP/2009-12.csv
-rw-r--r-- 0 lindauer or3uunp 143979 Jan 4 2012 r3.2/LDAP/2010-09.csv
-rw-r--r-- 0 lindauer or3uunp 145591 Jan 4 2012 r3.2/LDAP/2010-07.csv
-rw-r--r-- 0 lindauer or3uunp 139444 Jan 4 2012 r3.2/LDAP/2011-03.csv
-rw-r--r-- 0 lindauer or3uunp 142347 Jan 4 2012 r3.2/LDAP/2010-11.csv
-rw-r--r-- 0 lindauer or3uunp 138285 Jan 4 2012 r3.2/LDAP/2011-04.csv
-rw-r--r-- 0 lindauer or3uunp 149793 Jan 4 2012 r3.2/LDAP/2010-01.csv
-rw-r--r-- 0 lindauer or3uunp 146008 Jan 4 2012 r3.2/LDAP/2010-06.csv
-rw-r--r-- 0 lindauer or3uunp 144711 Jan 4 2012 r3.2/LDAP/2010-08.csv
-rw-r--r-- 0 lindauer or3uunp 137967 Jan 4 2012 r3.2/LDAP/2011-05.csv
-rw-r--r-- 0 lindauer or3uunp 140085 Jan 4 2012 r3.2/LDAP/2011-02.csv
-rw-r--r-- 0 lindauer or3uunp 143420 Jan 4 2012 r3.2/LDAP/2010-10.csv
-r--r--r-- 0 lindauer or3uunp 3923 Jan 4 2012 r3.2/license.txt
I think this is because the archive has subfolders, and for some reason the Python library has problems extracting from subfolders?
I also tried to open the tar file manually and had no problems, so I don't think the file is corrupted. Any help appreciated.
Comment: I tried the debug=3 and I get : ReadError: bad checksum
Found the following related Infos:
tar: directory checksum error
Cause
This error message from tar(1) indicates that the checksum of the directory and the files it has read from tape does not match the checksum advertised in the header block. Usually this message indicates the wrong blocking factor, although it could indicate corrupt data on tape.
Action
To resolve this problem, make certain that the blocking factor you specify on the command line (after -b) matches the blocking factor originally specified. If in doubt, leave out the block size and let tar(1) determine it automatically. If that remedy does not help, the tape data could be corrupted.
SE:tar-ignore-or-fix-checksum
I'd try the -i switch to see if you can just ignore any messages regarding EOF.
-i, --ignore-zeros ignore zeroed blocks in archive (means EOF)
Example
$ tar xivf backup.tar
bugs.python.org:tarfile-headererror
The comment in tarfile.py reads (Don't know the date of the file!):
- # We shouldn't rely on this checksum, because some tar programs
- # calculate it differently and it is merely validating the
- # header block.
ReadError: unexpected end of data
From the tarfile Documentation
The tarfile module defines the following exceptions:
exception tarfile.ReadError
Is raised when a tar archive is opened, that either cannot be handled by the tarfile module or is somehow invalid.
First, try another tar archive file to verify your Python environment.
Second, check whether your tar archive file matches the following format:
tarfile.DEFAULT_FORMAT
The default format for creating archives. This is currently GNU_FORMAT.
Third, set debug=3 when opening the archive to get more verbose diagnostics (tarfile.open forwards extra keyword arguments to the TarFile constructor):
tar = tarfile.open(filename, "r:bz2", debug=3)
...
...
class tarfile.TarFile(name=None, mode='r', fileobj=None, format=DEFAULT_FORMAT, tarinfo=TarInfo, dereference=False, ignore_zeros=False, encoding=ENCODING, errors='surrogateescape', pax_headers=None, debug=0, errorlevel=0)
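One more thing worth trying, given the blocking-factor discussion above: the question opens the archive in stream mode ("r|bz2") with a hand-picked bufsize. Random-access mode with the default buffer size sidesteps that. A sketch on a small self-made archive (not the 4 GB one from the question) which also shows that nested subfolders are read fine:

```python
import io
import os
import tarfile
import tempfile

# Build a small .tar.bz2 containing a nested path, then read one member
# back using random-access mode "r:bz2" and the default buffer size.
d = tempfile.mkdtemp()
path = os.path.join(d, "sample.tar.bz2")
data = b"user,date\nlindauer,2012-01-04\n"
with tarfile.open(path, "w:bz2") as tar:
    info = tarfile.TarInfo("r3.2/LDAP/2011-01.csv")  # nested, like the question
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))

with tarfile.open(path, "r:bz2") as tar:  # no custom bufsize, no stream mode
    member = tar.extractfile("r3.2/LDAP/2011-01.csv")
    content = member.read()

assert content == data
```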

Pandas - Optimal persistence strategy for highest compression ratio?

Question
Given a large series of DataFrames with a small variety of dtypes, what is the optimal design for Pandas DataFrame persistence/serialization if I care about compression ratio first, decompression speed second, and initial compression speed third?
Background:
I have roughly 200k dataframes of shape [2900,8] that I need to store in logical blocks of ~50 data frames per file. The data frame contains variables of type np.int8, np.float64. Most data frames are good candidates for sparse types, but sparse is not supported in HDF 'table' format stores (not that it would even help - see the size below for a sparse gzipped pickle). Data is generated daily and currently adds up to over 20GB. While I'm not bound to HDF, I have yet to find a better solution that allows for reads on individual dataframes within the persistent store, combined with top quality compression. Again, I'm willing to sacrifice a little speed for better compression ratios, especially since I will need to be sending this all over the wire.
There are a couple of other SO threads and other links that might be relevant for those that are in a similar position. However most of what I've found doesn't focus on minimizing storage size as a priority:
“Large data” work flows using pandas
HDF5 and SQLite. Concurrency, compression & I/O performance [closed]
Environment:
OSX 10.9.5
Pandas 0.14.1
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
PyTables version: 3.1.1
HDF5 version: 1.8.13
NumPy version: 1.8.1
Numexpr version: 2.4 (not using Intel's VML/MKL)
Zlib version: 1.2.5 (in Python interpreter)
LZO version: 2.06 (Aug 12 2011)
BZIP2 version: 1.0.6 (6-Sept-2010)
Blosc version: 1.3.5 (2014-03-22)
Blosc compressors: ['blosclz', 'lz4', 'lz4hc', 'snappy', 'zlib']
Cython version: 0.20.2
Python version: 2.7.8 (default, Jul 2 2014, 10:14:46)
[GCC 4.2.1 Compatible Apple LLVM 5.1 (clang-503.0.40)]
Platform: Darwin-13.4.0-x86_64-i386-64bit
Byte-ordering: little
Detected cores: 8
Default encoding: ascii
Default locale: (en_US, UTF-8)
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Example:
import pandas as pd
import numpy as np
import random
import cPickle as pickle
import gzip

def generate_data():
    alldfs = {}
    n = 2800
    m = 8
    loops = 50
    idx = pd.date_range('1/1/1980', periods=n, freq='D')
    for x in xrange(loops):
        id = "id_%s" % x
        df = pd.DataFrame(np.random.randn(n, m) * 100, index=idx)
        # adjust data a bit..
        df.ix[:, 0] = 0
        df.ix[:, 1] = 0
        for y in xrange(100):
            i = random.randrange(n - 1)
            j = random.randrange(n - 1)
            df.ix[i, 0] = 1
            df.ix[j, 1] = 1
        df.ix[:, 0] = df.ix[:, 0].astype(np.int8)  # adjust datatype
        df.ix[:, 1] = df.ix[:, 1].astype(np.int8)
        alldfs[id] = df
    return alldfs

def store_all_hdf(x, format='table', complevel=9, complib='blosc'):
    fn = "test_%s_%s-%s.hdf" % (format, complib, complevel)
    hdfs = pd.HDFStore(fn, mode='w', format=format, complevel=complevel, complib=complib)
    for key in x.keys():
        df = x[key]
        hdfs.put(key, df, format=format, append=False)
    hdfs.close()

alldfs = generate_data()

for format in ['table', 'fixed']:
    for complib in ['blosc', 'zlib', 'bzip2', 'lzo', None]:
        store_all_hdf(alldfs, format=format, complib=complib, complevel=9)

# pickle, for comparison
with open('test_pickle.pkl', 'wb') as f:
    pickle.dump(alldfs, f)
with gzip.open('test_pickle_gzip.pklz', 'wb') as f:
    pickle.dump(alldfs, f)
with gzip.open('test_pickle_gzip_sparse.pklz', 'wb') as f:
    sparsedfs = {}
    for key in alldfs.keys():
        sdf = alldfs[key].to_sparse(fill_value=0)
        sparsedfs[key] = sdf
    pickle.dump(sparsedfs, f)
Results
-rw-r--r-- 1 bazel staff 10292760 Oct 17 14:31 test_fixed_None-9.hdf
-rw-r--r-- 1 bazel staff 9531607 Oct 17 14:31 test_fixed_blosc-9.hdf
-rw-r--r-- 1 bazel staff 7867786 Oct 17 14:31 test_fixed_bzip2-9.hdf
-rw-r--r-- 1 bazel staff 9506483 Oct 17 14:31 test_fixed_lzo-9.hdf
-rw-r--r-- 1 bazel staff 8036845 Oct 17 14:31 test_fixed_zlib-9.hdf
-rw-r--r-- 1 bazel staff 26627915 Oct 17 14:31 test_pickle.pkl
-rw-r--r-- 1 bazel staff 8752370 Oct 17 14:32 test_pickle_gzip.pklz
-rw-r--r-- 1 bazel staff 8407704 Oct 17 14:32 test_pickle_gzip_sparse.pklz
-rw-r--r-- 1 bazel staff 14464924 Oct 17 14:31 test_table_None-9.hdf
-rw-r--r-- 1 bazel staff 8619016 Oct 17 14:31 test_table_blosc-9.hdf
-rw-r--r-- 1 bazel staff 8154716 Oct 17 14:31 test_table_bzip2-9.hdf
-rw-r--r-- 1 bazel staff 8481631 Oct 17 14:31 test_table_lzo-9.hdf
-rw-r--r-- 1 bazel staff 8047125 Oct 17 14:31 test_table_zlib-9.hdf
Given the results above, the best 'compression-first' solution appears to be to store the data in HDF fixed format, with bzip2. Is there a better way of organising the data, perhaps without HDF, that would allow me to save even more space?
Update 1
Per the comment below from Jeff, I have used ptrepack on the table store HDF file without initial compression -- and then recompressed. Results are below:
-rw-r--r-- 1 bazel staff 8627220 Oct 18 08:40 test_table_repack-blocsc-9.hdf
-rw-r--r-- 1 bazel staff 8627620 Oct 18 09:07 test_table_repack-blocsc-blosclz-9.hdf
-rw-r--r-- 1 bazel staff 8409221 Oct 18 08:41 test_table_repack-blocsc-lz4-9.hdf
-rw-r--r-- 1 bazel staff 8104142 Oct 18 08:42 test_table_repack-blocsc-lz4hc-9.hdf
-rw-r--r-- 1 bazel staff 14475444 Oct 18 09:05 test_table_repack-blocsc-snappy-9.hdf
-rw-r--r-- 1 bazel staff 8059586 Oct 18 08:43 test_table_repack-blocsc-zlib-9.hdf
-rw-r--r-- 1 bazel staff 8161985 Oct 18 09:08 test_table_repack-bzip2-9.hdf
Oddly, recompressing with ptrepack seems to increase total file size (at least in this case using table format with similar compressors).
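For intuition about why bzip2 keeps winning here: on sparse, run-heavy data, its block-sorting scheme tends to beat zlib's 32 KB sliding window. A standard-library sketch on synthetic sparse bytes (Python 3; this is illustrative, not the author's benchmark data):

```python
import bz2
import gzip
import random

# Synthetic stand-in for the mostly-zero int8 columns above: 2 MB of
# zeros with a thousand scattered ones.
random.seed(0)
buf = bytearray(2000000)
for _ in range(1000):
    buf[random.randrange(len(buf))] = 1
raw = bytes(buf)

gz = gzip.compress(raw, compresslevel=9)
bz = bz2.compress(raw, compresslevel=9)

# bzip2 typically produces the smaller output on this kind of data, at
# the cost of slower compression and decompression.
print(len(raw), len(gz), len(bz))
```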

Order a sequence of dates as they occur in calendar year

I've got a series of pipes to convert dates in a text file into unique, human-readable output and pull out MM DD. Now I would like to re-sort the output so that the dates display in the order in which they occur during the year. Anybody know a good technique using the standard shell or with a readily installable package on *nix?
Feb 4
Feb 5
Feb 6
Feb 7
Feb 8
Jan 1
Jan 10
Jan 11
Jan 12
Jan 13
Jan 2
Jan 25
Jan 26
Jan 27
Jan 28
Jan 29
Jan 3
Jan 30
Jan 31
Jan 4
Jan 5
Jan 6
Jan 7
Jan 8
Jan 9
There is a utility called sort with an option -M for sorting by month. If you have it installed, you could use that. For instance:
sort -k1 -M test.txt
-k1: First column
-M: Sort by month
Edited per twalberg's suggestion below:
sort -k1,1M -k2,2n test.txt
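For instance, with GNU sort in the C locale, the two keys sort the month names calendar-wise and the day numbers numerically:

```shell
printf 'Feb 4\nJan 10\nJan 2\nJan 1\n' | LC_ALL=C sort -k1,1M -k2,2n
# Jan 1
# Jan 2
# Jan 10
# Feb 4
```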
In two steps:
$ while read line; do date -d "$line" "+%Y%m%d"; done < file | sort -n > temp
$ while read line; do date -d "$line" "+%b %d"; done < temp > file
First, we convert the dates to YYYYMMDD and sort them:
$ while read line; do date -d "$line" "+%Y%m%d"; done < file | sort -n > temp
$ cat temp
20130101
20130102
20130103
20130104
20130105
20130106
20130107
20130108
20130109
20130110
20130111
20130112
20130113
20130125
20130126
20130127
20130128
20130129
20130130
20130131
20130204
20130205
20130206
20130207
20130208
Then we print them back in the previous %b %d format:
$ while read line; do date -d "$line" "+%b %d"; done < temp > file
$ cat file
Jan 01
Jan 02
Jan 03
Jan 04
Jan 05
Jan 06
Jan 07
Jan 08
Jan 09
Jan 10
Jan 11
Jan 12
Jan 13
Jan 25
Jan 26
Jan 27
Jan 28
Jan 29
Jan 30
Jan 31
Feb 04
Feb 05
Feb 06
Feb 07
Feb 08
And with sed, building a month lookup table in the hold space:
sed -n "1 {
H
x
s/.*\(\n\).*/01 Jan\102 Feb\103 Mar\104 Apr\105 May\106 Jun\107 Jul\108 Aug\109 Sep\110 Oct\111 Nov\112 Dec/
x
}
s/^\(.\{3\}\) \([0-9]\) *$/\1 0\2/
H
$ {
x
t subs
: subs
s/^\([0-9]\{2\}\) \([[:alpha:]]\{3\}\)\(\n\)\(.*\)\n\2/\1 \2\3\4\3\1 \2/
t subs
s/^[0-9]\{2\} [[:alpha:]]\{3\}\n//
t subs
p
}
" | sort | sed "s/^[0-9][0-9] //"
A sort is still needed (or a much more complex sed to do the sorting), but this approach works when sort -M doesn't.

Can't import psycopg2

I am trying to install postgresql_python.
I downloaded the tarball and installed it using:
python setup.py build
python setup.py install
I got /usr/lib64/python2.4/site-packages/psycopg2/ with
> total 836
> -rw-r--r-- 1 root root 12759 Dec 11 18:18 errorcodes.py
> -rw-r--r-- 1 root root 14584 Dec 12 13:49 errorcodes.pyc
> -rw-r--r-- 1 root root 14584 Dec 12 13:49 errorcodes.pyo
> -rw-r--r-- 1 root root 5807 Dec 11 18:18 extensions.py
> -rw-r--r-- 1 root root 7298 Dec 12 13:49 extensions.pyc
> -rw-r--r-- 1 root root 7298 Dec 12 13:49 extensions.pyo
> -rw-r--r-- 1 root root 31495 Dec 11 18:18 extras.py
> -rw-r--r-- 1 root root 35124 Dec 12 13:49 extras.pyc
> -rw-r--r-- 1 root root 35124 Dec 12 13:49 extras.pyo
> -rw-r--r-- 1 root root 6177 Dec 11 18:18 __init__.py
> -rw-r--r-- 1 root root 5740 Dec 12 13:49 __init__.pyc
> -rw-r--r-- 1 root root 5740 Dec 12 13:49 __init__.pyo
> -rw-r--r-- 1 root root 8855 Dec 11 18:18 pool.py
> -rw-r--r-- 1 root root 8343 Dec 12 13:49 pool.pyc
> -rw-r--r-- 1 root root 8343 Dec 12 13:49 pool.pyo
> -rw-r--r-- 1 root root 3389 Dec 21 11:17 psycopg1.py
> -rw-r--r-- 1 root root 3182 Dec 21 11:22 psycopg1.pyc
> -rw-r--r-- 1 root root 3167 Dec 12 13:49 psycopg1.pyo
> -rwxr-xr-x 1 root root 572648 Dec 21 11:22 _psycopg.so
> drwxr-xr-x 2 root root 4096 Dec 21 10:38 tests
> -rw-r--r-- 1 root root 4427 Dec 11 18:18 tz.py
> -rw-r--r-- 1 root root 4325 Dec 12 13:49 tz.pyc
> -rw-r--r-- 1 root root 4325 Dec 12 13:49 tz.pyo
But in the Python shell, when I try to import the library, I get an error:
>>> import psycopg2
Traceback (most recent call last):
File "<stdin>", line 1, in ?
File "/usr/lib64/python2.4/site-packages/psycopg2/__init__.py", line 76, in ?
from psycopg2._psycopg import _connect, apilevel, threadsafety, paramstyle
ImportError: cannot import name _connect
I am running with Postgresql 9.2.
What am I missing here?
Please let me know.
Thanks.
You most likely have to remove some existing psycopg2-related packages from your system Python. Some common locations:
rm -r /usr/lib/python2.4/site-packages/psycopg2*
rm -r /usr/local/lib/python2.6/dist-packages/psycopg2*
However, I recommend setting up a virtualenv to house the packages for your Python app.
Check out virtualenv. It's easy to use once installed:
virtualenv myapp
. myapp/bin/activate
cd ~/your/postgres_lib/download
python setup.py install
This will install the postgres libraries into your virtualenv (located under the myapp folder). Then, whenever you want to run your app, you just need to activate the environment via
. myapp/bin/activate
adjusting the path to myapp as necessary. There are helpers, like virtualenvwrapper, to streamline this process.
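Independent of virtualenv, a quick way to diagnose this class of error is to check which file Python actually resolves a module to, since a stale copy earlier on sys.path can shadow a fresh install. A sketch (Python 3; demonstrated with a stdlib module rather than psycopg2):

```python
import importlib.util

def module_location(name):
    # Where would `import name` actually load from? None if not importable.
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

print(module_location("json"))  # path to the json package that would be imported
```

If the printed path is not the directory you just installed into, an older copy is shadowing it and should be removed.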
