I have been using Mercurial for about a year without any issues. Today I ran into one for the first time.
When I try to push to the remote server with
$ hg push
I get the following response
searching for changes
remote: abort: No space left on device
abort: unexpected response: empty string
I googled this and found that it is a documented issue; the Mercurial FAQ has the following excerpt:
4.28. I get a "no space left" or "disk quota exceeded" on push
I get a "no space left" or "disk quota exceeded" on push, but there is plenty of space and/or I have no quota limit on the device where the remote hg repository is.
The problem probably comes from the fact that Mercurial uses /tmp (or one of the directories defined by the environment variables $TMPDIR, $TEMP or $TMP) to uncompress the bundle received over the wire. The decompression may then hit device limits.
You can of course set $TMPDIR to another location on remote in the default shell configuration file, but it will be potentially used by other processes than mercurial. Another solution is to set a hook in a global .hgrc on remote. See the description of how to set a hook for changing tmp directory on remote when pushing.
I have created the hook in my /etc/mercurial/hgrc file that looks like this
[hooks]
pre-serve.tmpdir = python:hgenviron.settmpdir
and then I am supposed to create hgenviron.py
import os
# see http://docs.python.org/lib/module-tempfile.html
def settmpdir(ui, repo, hooktype, node=None, source=None, **kwargs):
    os.environ["TMPDIR"] = "/home/tmp"
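The hook works because Python's tempfile module consults $TMPDIR when choosing where to put temporary files. A quick standalone sketch to convince yourself of that (not part of the hook; the directory name is illustrative):

```python
# Demonstrate that TMPDIR controls tempfile's choice of directory,
# which is the mechanism the FAQ hook relies on.
import os
import tempfile

custom = os.path.join(os.getcwd(), "hg-tmp")  # illustrative directory
os.makedirs(custom, exist_ok=True)

os.environ["TMPDIR"] = custom
tempfile.tempdir = None  # reset tempfile's cached choice so TMPDIR is re-read

print(tempfile.gettempdir())  # now the custom directory
```

Note that tempfile caches its choice after first use, which is why a pre-serve hook (running before Mercurial creates any temp files) is the right place to set the variable.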
The problem I am having is that I don't know how to add this file to $PYTHONPATH on Fedora.
My operating system is Fedora 12 x86_64
I have python 2.6
I have mercurial 1.6.4
UPDATE:
I just added hgenviron.py to /usr/lib/python2.6/site-packages/hg/hgenviron.py and
PYTHONPATH=$PYTHONPATH:/usr/lib/python2.6/site-packages/hg/hgenviron.py
export PYTHONPATH
to a .sh file in /etc/profile.d, along with the hook in /etc/mercurial/.
However I still get the error:
remote: abort: pre-serve.tmpdir hook is invalid (import of "hgenviron" failed) abort:
no suitable response from remote hg!
The problem is that you're using the wrong import statement. It should be: from hg import hgenviron
How to set PYTHONPATH depends on how and where you want to add it.
In /etc/profile.d you can find a set of scripts that are run when bash is loaded. /etc/profile is the global file, which calls those scripts and has this comment:
# System wide environment and startup programs, for login setup
# Functions and aliases go in /etc/bashrc
# It's NOT good idea to change this file unless you know what you
# are doing. Much better way is to create custom.sh shell script in
# /etc/profile.d/ to make custom changes to environment. This will
# prevent need for merging in future updates.
/etc/profile is run when the bash environment is loaded. For a single user, you instead edit ~/.bash_profile or ~/.bashrc (if they don't exist, you can create them). These scripts are run when that specific user logs in. You should examine these files in detail to understand how the environment is created and set up.
You would add the directory that contains hgenviron.py (a PYTHONPATH entry must be a directory, not a .py file), something like:
PYTHONPATH=/home/tmp:$PYTHONPATH
export PYTHONPATH
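To check that an entry added via PYTHONPATH actually reaches the interpreter, you can spawn Python with the variable set and look for it on sys.path (the directory name here is illustrative):

```python
# Spawn a fresh interpreter with PYTHONPATH set and confirm the entry
# shows up on sys.path, the list Python searches on import.
import os
import subprocess
import sys

env = dict(os.environ, PYTHONPATH="/home/tmp")  # illustrative directory
out = subprocess.check_output(
    [sys.executable, "-c", "import sys; print('/home/tmp' in sys.path)"],
    env=env,
)
print(out.decode().strip())  # True
```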
If you're struggling to get to the bottom of the PYTHONPATH issue, you can instead state the location of the hgenviron.py file explicitly in the hook.
pre-serve.tmpdir = python:/var/hg/hgenviron.py:settmpdir
Note that settmpdir is then referenced with a : rather than the . used in the original example.
I'm using Cython in --embed mode to produce a .exe. I'm evaluating the minimal set of files required to distribute an embed-Cython-compiled program and make it work on any machine. To do this, I copy only a minimal number of files from the Python Windows embeddable package.
In order to check this, I need to be sure that the current process I'm testing doesn't in fact use my system default Python install, i.e. C:\Python38.
To do this, I open a new cmd.exe and run set PATH=, which temporarily removes everything from PATH. Then I can test any self-compiled app.exe and make sure it doesn't reuse C:\Python38's files under the hood.
It works, except for the modules. Even after doing set PATH=, my code app.py
import json
print(json.dumps({"a":"b"}))
when Cython-compiled with --embed into a .exe, works, but it still uses C:\Python38\Lib\json\__init__.py! I know this for sure because if I temporarily remove that file, my .exe fails: it cannot find the json module.
How to completely remove any link to C:\Python38 when debugging a Python program which shouldn't use these files?
Why isn't set PATH= enough? Which other environment variable does it use for modules? I checked all my system variables and didn't find any that seem related to Python.
Python has a quite complicated heuristic for finding its "installation" (see for example this SO question or this description), so it probably doesn't find the installation you are providing but the "default" one.
Probably the simplest way is to set the environment variable PYTHONPATH to point at the desired installation before starting the embedded interpreter.
By examining sys.path you can check whether the correct installation was found.
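For that examination, something along these lines, run inside the embedded interpreter, will show which installation was picked up:

```python
# Show where the interpreter thinks it is installed and what it searches.
import sys

print(sys.executable)  # the running binary
print(sys.prefix)      # the installation Python decided on
for entry in sys.path:
    print(entry)
```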
Thanks to #ead's answer and his link to getpath.c (which redirects to getpathp.c in the case of Windows), we can learn that the rule for building the module search path is:
current directory first
PYTHONPATH env. variable
registry key HKEY_LOCAL_MACHINE\SOFTWARE\Python or the same in HKCU
PYTHONHOME env. variable
finally:
Iff - we can not locate the Python Home, have not had a PYTHONPATH
specified, and can't locate any Registry entries (ie, we have nothing
we can assume is a good path), a default path with relative entries is
used (eg. .\Lib;.\DLLs, etc)
Conclusion: in order to debug an embedded version of Python, without interfering with the default system install (C:\Python38 in my case), I finally solved it by temporarily renaming the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Python to HKEY_LOCAL_MACHINE\SOFTWARE\PythonOld.
Side note: I'm not sure I will ever revert this registry key: my normal Python install shouldn't need it to find its path anyway. When I run python.exe from anywhere (it is in PATH for everyday use), it automatically looks in .\Lib\ and .\DLLs\, which is correct. I don't see a single use case in which my normal python.exe wouldn't find its .\Lib\ or .\DLLs\ subdirectories and would need the registry for this: if python.exe has started, its path has been found, and it can take its .\Lib subfolder without help from the registry. I think 99.99% of the time this registry feature does more harm than good, preventing a Python install from being truly "portable" (i.e. movable from one folder to another).
Notes:
To be 100% sure, I also did this in command line, but I don't think it's necessary:
set PATH=
set PYTHONPATH=
set PYTHONHOME=
It might be helpful, when debugging an embedded Python, to try import ctypes. If you don't have _ctypes.pyd and libffi-7.dll in your embedded install folder, the import should fail. If it doesn't, Python is looking somewhere else (probably in your default system-wide install).
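Along the same lines, printing each module's __file__ shows exactly which files are being used, so any path under C:\Python38 stands out immediately (a generic sketch, run from inside the interpreter you are testing):

```python
# Report where modules were actually loaded from; a stray path into the
# system-wide install means the embedded folder is not self-contained.
import ctypes
import json

for mod in (json, ctypes):
    print(mod.__name__, "->", getattr(mod, "__file__", "<built-in>"))
```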
I'm trying to add a git clean-filter in order to ignore outputs and execution_count from my IPython notebook files.
I've basically followed this article (based on this SO answer) and modified it slightly for my needs. Also, I'm on Windows so the part about making the python script executable isn't needed as far as I know (see Python FAQ).
I want to bundle that to my repository so that other contributors get it too.
I've saved the ipynb_drop_output.py at the root of my repository and saved the gitconfig and gitattributes files at the same place so at the root I have:
.gitignore
.gitconfig
.gitattributes
ipynb_drop_output.py
MyNotebook.ipynb
In .gitattributes:
*.ipynb filter=clean_ipynb
In .gitconfig:
[filter "clean_ipynb"]
clean = ipynb_drop_output.py
smudge = cat
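For reference, the clean filter itself boils down to parsing the notebook JSON and blanking the volatile fields. A minimal sketch along the lines of the linked article (not the exact script from the post) looks like this:

```python
# Minimal sketch of an ipynb clean filter: blank outputs and
# execution_count in code cells. In the real filter the notebook
# arrives on stdin and leaves on stdout, e.g.:
#   json.dump(clean(json.load(sys.stdin)), sys.stdout, indent=1)
import json

def clean(nb):
    """Strip outputs and execution counts from a notebook dict."""
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "code":
            cell["outputs"] = []
            cell["execution_count"] = None
    return nb

demo = {"cells": [{"cell_type": "code", "execution_count": 3,
                   "outputs": [{"output_type": "stream", "text": "hi"}],
                   "source": ["1+1"]}]}
print(json.dumps(clean(demo)))
```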
I've tested the code of my ipynb_drop_output manually and it works like a charm. Yet git diff still shows me execution_count and outputs that changed. It appears the script isn't running at all.
I'm thinking it might be because of the clean = ipynb_drop_output.py line, but I've tried every variation: not including the .py, including the full path "C...\ipynb_drop_output.py", with forward slashes too, etc.
My second theory is that git just isn't looking at the .gitconfig file, but I'm unclear how to tell it to, and/or how to check that it actually is. I thought the point of git config --file .gitconfig filter.clean_ipynb.clean ipynb_drop_output was to do just that...
How can I make it work on Windows please?
Let's assume that you have your repository checked out under: ~/myrepo/.
You need to tell git where to find your repo-wide custom .gitconfig, that you want all your users to use. You do that by running:
cd ~/myrepo/
git config --local include.path ../.gitconfig
Note the ../, which is missing from your attempts to make this work; it's needed because .gitconfig and .git/config are not in the same directory. The layout of your ~/myrepo/ will include:
.git/config
.gitconfig
.gitattributes
You will need the last 2 files committed to your repo.
All your users will have to execute the git config command from above right after cloning your repo, to tell git to trust ~/myrepo/.gitconfig. It's not possible to do it on their behalf for security reasons.
Finally, the reason your incorrect manual configuration failed silently is that git was designed this way, to allow for optional configuration files. As of this writing, if your .git/config contains:
[include]
path = ../.gitconfig
and ../.gitconfig is not there, git will silently skip over it. So if you typed a wrong path, it is simply skipped.
There is a new development on this front, and the jury is still out. Hopefully there will be a way to diagnose such git issues in the future.
Given Fileconveyor's limited documentation, I'm confused as to where it installs after I've run the pip command given on their website, Fileconveyor.org.
Bottom line: has anyone had luck installing Fileconveyor on Debian 6 for integration with Drupal 6 and the CDN module?
I can't figure out where to put my settings.xml file.
Thanks,
Curtis
The documentation does give an indication of where things are put, but it isn't entirely clear, in that we expect an "installation" to move things to certain destinations, such as /usr/bin. In reality, Fileconveyor stays in whatever directory the git clone placed it.
The settings file (which must be copied from the file named config.sample.xml) is in a folder 'conveyor' within the main 'conveyor' folder.
The link where you can read about this is https://github.com/wimleers/fileconveyor
It reads in part: "The sample configuration file (config.sample.xml) should be self explanatory. Copy this file to config.xml, which is the file File Conveyor will look for,
and edit it to suit your needs."
Starting it doesn't actually invoke any command named 'fileconveyor', which, as I mentioned, is what one might expect from a typical installation. Another instruction from the link above reads:
"Starting File Conveyor
File Conveyor must be started by starting its arbitrator (which links
everything together; it controls the file system monitor, the processor
chains, the transporters and so on). You can start the arbitrator like this:
python /path/to/fileconveyor/arbitrator.py"
In my case the command is 'python ~/src/conveyor/conveyor/arbitrator.py'
In retrospect I might reinstall in another directory, in case I ever empty my ~/src folder, which is the folder I use to download items to compile and install, then clean up. I wasn't expecting it to end up being Fileconveyor's installation folder.
Hope this helps.
While using Google App Engine to develop in Python, yesterday it stopped running the current version of the script.
Instead of executing the most recent version it seems to run the previously pre-compiled .pyc even if the .py source was changed.
Error messages actually quote the correct line from the most current source, except when the position of the line has changed; then they quote whatever line now sits where the error previously occurred.
Deleting .pyc files causes them to be recreated from the current version. Deleting all .pycs is a poor workaround for now.
How can I get to the root cause of the problem?
Did you check your system clock? I believe Python determines whether to use the .pyc or the .py based on timestamps. If your system clock got pushed back, it would see the .pyc files as newer until the clock caught up to the time they were last built.
Are you editing the .py files on a different system from the one where they are being compiled?
The compiler recompiles a .py file if its modification date is newer than that of the .pyc file.
The fact that the .pyc file is being picked up indicates that your .py file has an older modification date. This is only possible if the .py file is modified on a different system and then copied to the one where it is used, with the editing system's clock set behind the runtime system's clock.
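The timestamp comparison described above can be reproduced directly (a sketch of the Python 2-era behavior, where the .pyc records the source's modification time):

```python
# Reproduce the stale-.pyc condition: make the source look older than
# its compiled file, the situation where the old bytecode wins.
import os
import py_compile
import tempfile

d = tempfile.mkdtemp()
src = os.path.join(d, "mod.py")
with open(src, "w") as f:
    f.write("VALUE = 1\n")
pyc = src + "c"
py_compile.compile(src, cfile=pyc)

# Simulate editing the file on a machine whose clock lags behind:
past = os.path.getmtime(src) - 3600
with open(src, "w") as f:
    f.write("VALUE = 2\n")
os.utime(src, (past, past))

stale = os.path.getmtime(src) < os.path.getmtime(pyc)
print("stale .pyc would be used:", stale)
```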
The following steps solved the issue temporarily:
Delete GoogleAppEngineLauncher from your Applications folder.
Rename the file ~/Library/Application Support/GoogleAppEngineLauncher/Projects.plist (e.g. to Projects.plist.backup).
Rename the file ~/Library/Preferences/com.google.GoogleAppEngineLauncher.plist (e.g. to com.google.GoogleAppEngineLauncher.plist.backup).
Download and install Google App Engine Launcher again.
Use "File", "Add existing application…" to add your projects again, do not forget to set any flags you had set before.
Alternatively, it might even work to start GAEL once, close it, and put your backed-up preference files back into place, to avoid having to reconfigure.
Edit: Turns out that fixes it… temporarily. Not exactly a very easy issue to debug.
Weirdly enough it works when running the appserver from the command line, such as
dev_appserver.py testproject/ -p 8082 --debug
I am using Bazaar v2.0.1 on Mac OS X 10.6.2.
When I perform a commit after moving a large number of files/directories (over 10,000) I get the following error message:
bzr: ERROR: [Errno 24] open: Too many open files: '.'
My first workaround was to break the commit up into several subsets. However, this is not ideal, and I'm afraid there may come a point where one change (that cannot be broken into subsets) will give me the same error.
[Update]
After doing some research this is what I have found:
It looks like:
Errno 24 "open: Too many open files"
is a Python error.
According to this blog post, the limit on the number of open files can be changed from within a Python script with resource.setrlimit. However, I was really looking for a way to change the default value so Bazaar would automatically run with a higher limit (BTW, it looks like my default was 2560).
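The setrlimit approach the blog post describes amounts to the following (values illustrative; raising the soft limit up to the hard limit needs no special privileges):

```python
# Raise this process's soft limit on open files, as the blog post
# suggests doing from inside a Python script.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
target = 10240 if hard == resource.RLIM_INFINITY else min(10240, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))
print("soft limit now:", resource.getrlimit(resource.RLIMIT_NOFILE)[0])
```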
According to the apple documentation for the setrlimit system call there is a sh built-in command called ulimit which can be used to change the setting. Any process started from the shell would then inherit this value.
My current workaround is to add ulimit -n 10240 to ~/.profile. This way, when I run bzr commit from the shell, it will be able to open 10240 files. I selected 10240 because it is the maximum allowed for a user process on Mac OS X.
It doesn't seem like Bazaar should need that many files open at once. I am worried that if I ever move more files, this may come back to bite me. Is this a bug in Bazaar? Is there anything else I can do?
You can use lsof to see all open files. You might try grepping for the pid of the bazaar process, or monitoring the number of open files.
Note that you may or may not need to be root to see all files / processes relevant for your situation.
Try ulimit -n 1024 (or more) before running bazaar, if your shell supports it (it's a bash builtin).
Jinx! Edit: you can put it in your ~/.profile if there is one, or in ~/.bash_profile.