I am trying to be Pythonic. Thus, I have this code:
import git
import uuid
repo = git.Repo(...)
u = repo.create_remote(str(uuid.uuid4()), 'https://github.com/...')  # remote names are strings; uuid4() returns a UUID object
If I weren't trying to be Pythonic, I could do this:
repo.git.push(u.name, refspec, '--force-with-lease')
...but I am trying to be Pythonic. How do I do a --force-with-lease push using this (git.remote.Remote) object?
u.push(refspec, help=needed)
It looks like I can say:
u.push(refspec, force=True)
...but I don't think that uses a lease?
After having a quick look at the source code of GitPython, I would guess that it turns _ into - when interpreting the **kwargs. I think the correct way should be:
u.push(refspec, force_with_lease=True)
This is based on the function dashify(), which is called from transform_kwarg().
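For the curious, the transformation is roughly this (a paraphrase of GitPython's helpers, not the verbatim implementation):

def dashify(string):
    # git flag names use dashes where Python identifiers use underscores
    return string.replace('_', '-')

def transform_kwarg(name, value):
    # True becomes a bare flag; other values become --flag=value
    flag = ('--' + dashify(name)) if len(name) > 1 else ('-' + name)
    if value is True:
        return [flag]
    return ['%s=%s' % (flag, value)]

print(transform_kwarg('force_with_lease', True))  # ['--force-with-lease']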
I would like to understand some Python code that I've been reading:
my_stream = some.library.method(arg1=val, arg2=val)(input_stream)
My guess is that some.library.method() returns an iterator into which input_stream is passed as an argument. Is this correct?
I have searched "python generator functions" to get documentation on this type of syntax but have found nothing other than nested examples such as: sum(mult(input)). Can anyone provide an explanation or link?
UPDATE
Below is a specific example:
tokenized_train_stream = trax.data.Tokenize(vocab_file=VOCAB_FILE, vocab_dir=VOCAB_DIR)(train_stream)
Is this correct?
If you are unsure whether a thing in Python is a particular kind of object, you can use the built-in inspect module. It provides numerous is*() predicates, among them isgenerator(). A simple usage example:
import inspect
lst = [1,2,3]
gen = (i for i in [1,2,3])
print(inspect.isgenerator(lst)) # False
print(inspect.isgenerator(gen)) # True
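As for the original question: some.library.method(arg1=val, arg2=val) returns a callable (often a closure or an object defining __call__), and that callable is then applied to input_stream. A minimal sketch of the pattern with hypothetical names (not trax's API):

import inspect

def scale(factor):            # configuration call: returns a callable
    def apply(stream):        # this is what receives the stream
        for item in stream:
            yield item * factor
    return apply

doubled = scale(2)([1, 2, 3])        # same shape as method(args)(input_stream)
print(inspect.isgenerator(doubled))  # True
print(list(doubled))                 # [2, 4, 6]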
A while ago I was looking for how to rename several columns at once for a PySpark DataFrame and came across something like the following:
import pyspark
def rename_sdf(df, mapper={}, **kwargs_mapper):
    # Do something
    # return something
    pass  # body elided in the original

pyspark.sql.dataframe.DataFrame.rename = rename_sdf
I am interested in that last bit, where a method is added to the PySpark DataFrame class through an assignment statement.
The thing is, I am creating a GitHub repo to store all my functions and ETLs, and I thought that if I could apply the logic shown above, it would be super easy to just create an __init__.py module where I attach all my functions like:
from funcs import *
pyspark.sql.dataframe.DataFrame.func1 = func1
pyspark.sql.dataframe.DataFrame.func2 = func2
.
.
.
pyspark.sql.dataframe.DataFrame.funcN = funcN
I guess my question is:
Is this useful? Is it good for performance? Is it wrong? Is it un-Pythonic?
That can definitely have its uses in certain scenarios. I would recommend putting this code into a function so the user must explicitly call it.
import pyspark
import funcs

def wrap_pyspark_dataframe():
    pyspark.sql.dataframe.DataFrame.func1 = funcs.func1
    pyspark.sql.dataframe.DataFrame.func2 = funcs.func2
    ...
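To see why this works at all: functions assigned to a class become methods, so the instance is passed as the first argument. A self-contained sketch with a plain class (hypothetical names, no pyspark needed):

class Frame:
    pass

def describe(self):
    # 'self' receives the instance, just as 'df' receives the DataFrame above
    return "Frame %d" % id(self)

Frame.describe = describe
print(Frame().describe())  # the patched-in method is now callable on instances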
Introduction
PyDev is a great Eclipse plugin that lets us write Python code easily.
It can even give autocompletion suggestions when I do this:
from package.module import Random_Class
x = Random_Class()
x. # the autocompletion pops up,
   # showing every method & attribute inside Random_Class
That is great!
The Problem (And My Question)
However, when I don't use an explicit import and use __import__ instead, I can't get the same autocompletion.
import sys

import_location = ".".join(('package', 'module'))
__import__(import_location, globals(), locals(), ['*'])
My_Class = getattr(sys.modules[import_location], 'Random_Class')
x = My_Class()
x. # I expect autocompletion, but nothing happens
Question: is there any way (in PyDev or any IDE) to make the second one also show autocompletion?
Why do I need to do this?
Well, I'm making a simple MVC framework, and I want to provide something like load_model, load_controller, and load_view that still work with autocompletion (or at least can be made to work).
So, instead of leaving users to do this (although I don't forbid them from doing so):
from applications.the_application.models.the_model import The_Model
x = The_Model()
x. # autocompletion works
I want to let users do this:
x = load_model('the_application', 'the_model')()
x. # autocompletion still works
The "applications" part is actually configurable by another script, and I don't want users to have to change all of their model/controller imports every time they change the configuration. Plus, I think load_model, load_controller, and load_view make the MVC pattern more obvious.
Unexpected Answer
I know some tricks, such as doing this (as people do with web2py):
import sys

import_location = ".".join(('package', 'module'))
__import__(import_location, globals(), locals(), ['*'])
My_Class = getattr(sys.modules[import_location], 'Random_Class')
x = My_Class()
if 0:
    from package.module import Random_Class
    x = Random_Class()
x. # Now autocompletion is going to work
and I don't expect to do this, since it will only add unnecessary extra work.
I don't expect any "don't try to be clever" comments; I have enough of them.
I don't expect any "dynamic import is evil" comments; I'm not a purist.
I don't expect any "just use Django, or Pylons, or whatever" comments; such comments are unrelated to my question.
I have done this before. This may be slightly different from your intended method, so let me know if it doesn't apply.
I dynamically import different modules that all subclass a master class, using similar code to your example. Because the subclassing module already imports the master, I don't need to import it in the main module.
To get autocompletion, the solution was to import the master class into the main module first, even though it isn't used directly there. In my case it was a good fallback if the particular subclass didn't exist, but that's an implementation detail.
This only works if your classes all inherit from one parent.
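Roughly, the shape of it (module and class names here are hypothetical, and importlib.import_module is the modern spelling of the __import__ dance above):

import importlib

from package.master import MasterClass  # explicit import; gives the IDE a class to analyze

def load_subclass(module_name, class_name):
    # dynamically load a subclass, falling back to the master class
    module = importlib.import_module(module_name)
    return getattr(module, class_name, MasterClass)

x = load_subclass('package.plugins.foo', 'FooClass')()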
Not really an answer to my own question, but I can change the approach. Instead of providing load_model(), I can use a relative import. Something like this:
from ..models.my_model import Model_Class as Great_Model
m = Great_Model()
I'm internationalizing/i18n-ing a Django project. We have one part that runs independently and performs background tasks; it's invoked via RabbitMQ. I want to test that the i18n/l10n works for that part. However, our app isn't translated yet, and won't be for a while, and I want to write the unit tests before translation begins.
I'd like to mock some translations, so that _("anything") is returned as a constant string, so that I can test that it's trying to translate things, without me needing to set up all the translations.
I tried using mock, but with mock.patch('django.utils.translation.ugettext_lazy', my_function_that_just_returns_one_string): didn't work. The _ is imported as from django.utils.translation import ugettext_lazy as _.
You can do the following to replace the ugettext method on the default translation object:
from django.utils.translation.trans_real import get_language, translation
translation(get_language()).ugettext = mock_ugettext
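For instance, wrapped in a test case so the original method is restored afterwards (a sketch; the lambda stands in for whatever mock_ugettext you want):

from django.test import TestCase
from django.utils.translation.trans_real import get_language, translation

class I18nTestCase(TestCase):
    def setUp(self):
        self.trans = translation(get_language())
        self.original_ugettext = self.trans.ugettext
        self.trans.ugettext = lambda s: 'xxx_' + s  # the mock translation

    def tearDown(self):
        self.trans.ugettext = self.original_ugettext  # restore the real method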
I couldn't find an existing way to do this. However from reading the Django source code I came up with a hacky, brittle way to do this by looking at the _active DjangoTranslation objects, then wrapping their ugettext methods. I've described it here: http://www.technomancy.org/python/django-i18n-test-translation-by-manually-setting-translations/
I've looked at your solution, and I think it's both ingenious and simple for testing i18n support when you have no translation strings provided. But I'm afraid the translation package is just something that always works and that we take for granted, so seeing its internals in heavily commented test code would, at least, make me run off in fear (chuckle).
I think creating a test application, added to INSTALLED_APPS in test settings, which provides its own translations, is a much cleaner approach. Your tests would be simplified to translation.activate('fr'); self.assertEqual('xxx_anything', gettext('anything'), 'i18n support should be activated.').
With simple tests, other developers could quickly follow-up and see that the test app's package contains a /locale directory, which should immediately document your approach.
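For concreteness, the pieces involved might look like this (all names hypothetical):

# test_settings.py
from myproject.settings import *  # noqa
INSTALLED_APPS = list(INSTALLED_APPS) + ['tests.i18n_testapp']

# tests/i18n_testapp/locale/fr/LC_MESSAGES/django.po would then contain entries like:
# msgid "anything"
# msgstr "xxx_anything"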
It seems you're not patching the correct module. If it's your foo/models.py that has the from django.utils.translation import ugettext_lazy as _ statement, then _ is in the namespace of the foo.models module, and that is where you have to patch.
with mock.patch('foo.models._', return_value='MOCKED_TRANSLATION'):
    ...
or
with mock.patch('foo.models._') as mock_ugettext_lazy:
    mock_ugettext_lazy.side_effect = lambda x: x + ' translated'
    ...
    assert translated_text == 'example_text translated'
If you have multiple modules using ugettext_lazy, then you can do it like so:
with mock.patch('foo.models._', side_effect=mock_translator), \
        mock.patch('bar._', side_effect=mock_translator):
    ...
Unfortunately, there is no one-liner to mock it for all modules that use ugettext_lazy, because once the function has been imported into your modules, it's pointless to change django.utils.translation.ugettext_lazy -- the original references will keep pointing to the original function.
See https://docs.python.org/3/library/unittest.mock.html#where-to-patch for more.
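The effect is easy to demonstrate without Django by simulating the two modules in sys.modules (everything below is a self-contained toy, not Django code):

import sys
import types
from unittest import mock

# a fake library module with a function...
mylib = types.ModuleType('mylib')
mylib.greet = lambda: 'hello'
sys.modules['mylib'] = mylib

# ...and a fake consumer module that imported it under the name '_'
app = types.ModuleType('app')
app._ = mylib.greet
sys.modules['app'] = app

with mock.patch('mylib.greet', return_value='patched'):
    print(app._())  # 'hello' -- app still holds its own reference

with mock.patch('app._', return_value='patched'):
    print(app._())  # 'patched' -- patch where the name is looked up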
Imagine you have an IO-heavy function like this:
from hashlib import md5

def getMd5Sum(path):
    with open(path, 'rb') as f:  # read bytes, since md5 hashes bytes
        return md5(f.read()).hexdigest()
Do you think Python is flexible enough to allow code like this (notice the $):
def someGuiCallback(filebutton):
    ...
    path = filebutton.getPath()
    md5sum = $getMd5Sum(path)
    showNotification("Md5Sum of file: %s" % md5sum)
    ...
To be executed as something like this:
def someGuiCallback_1(filebutton):
    ...
    path = filebutton.getPath()
    Thread(target=someGuiCallback_2, args=(path,)).start()

def someGuiCallback_2(path):
    md5sum = getMd5Sum(path)
    glib.idle_add(someGuiCallback_3, md5sum)

def someGuiCallback_3(md5sum):
    showNotification("Md5Sum of file: %s" % md5sum)
    ...
(glib.idle_add just pushes a function onto the queue of the main thread)
I've thought about using decorators, but they don't give me access to the 'content' of the function after the call (the showNotification part).
I guess I could write a 'compiler' to transform the code before execution, but that doesn't seem like the optimal solution.
Do you have any ideas, on how to do something like the above?
You can use import hooks to achieve this goal...
PEP 302 - New Import Hooks
PEP 369 - Post Import Hooks
... but I'd personally view it as a little bit nasty.
If you want to go down that route though, essentially what you'd be doing is this (a modern sketch follows the list):
You add an import hook for an extension (e.g. ".thpy").
That import hook is then responsible for (essentially) passing some valid code as a result of the import.
That valid code is given arguments that effectively relate to the file you're importing.
That means your precompiler can perform whatever transformations you like to the source on the way in.
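For what it's worth, here is roughly what that looks like on Python 3 with importlib instead of imputil (a sketch: the ".thpy" extension and the no-op transform are placeholders for your precompiler):

import importlib.machinery
import importlib.util
import os
import sys

class PrecompilingLoader(importlib.machinery.SourceFileLoader):
    def source_to_code(self, data, path, *, _optimize=-1):
        source = data.decode('utf-8')
        transformed = source  # a real precompiler would rewrite the source here
        return compile(transformed, path, 'exec')

class ThpyFinder:
    def find_spec(self, fullname, path=None, target=None):
        # look for "<modname>.thpy" on the search path
        filename = fullname.rpartition('.')[2] + '.thpy'
        for entry in (path if path is not None else sys.path):
            candidate = os.path.join(entry or '.', filename)
            if os.path.exists(candidate):
                loader = PrecompilingLoader(fullname, candidate)
                return importlib.util.spec_from_file_location(
                    fullname, candidate, loader=loader)
        return None

sys.meta_path.insert(0, ThpyFinder())
# after this, `import mymodule` will pick up a mymodule.thpy from sys.path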
On the downside:
Whilst using import hooks in this way will work, it will surprise the life out of any maintainer of your code. (Bad idea IMO)
The way you do this relies upon imputil - which was removed in Python 3.0, which means code written this way has a limited lifetime.
Personally I wouldn't go there, but if you do, there's an issue of Python Magazine where doing this sort of thing is covered in some detail, and I'd advise getting a back issue of that to read up on it (written by Paul McGuire, April 2009 issue, probably available as a PDF).
Specifically, it uses imputil and pyparsing as its example, but the principle is the same.
How about something like this:
from threading import Thread

def performAsync(asyncFunc, notifyFunc):
    def threadProc():
        retValue = asyncFunc()
        glib.idle_add(notifyFunc, retValue)  # hand the result back to the main thread
    Thread(target=threadProc).start()

def someGuiCallback(filebutton):
    path = filebutton.getPath()
    performAsync(
        lambda: getMd5Sum(path),
        lambda md5sum: showNotification("Md5Sum of file: %s" % md5sum)
    )
A bit ugly with the lambdas, but it's simple and probably more readable than using precompiler tricks.
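If the lambdas bother you, functools.partial plus a named callback expresses the same thing (same hypothetical GUI functions as above):

from functools import partial

def someGuiCallback(filebutton):
    path = filebutton.getPath()
    def notify(md5sum):
        showNotification("Md5Sum of file: %s" % md5sum)
    performAsync(partial(getMd5Sum, path), notify)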
Sure, you can access a function's (already compiled) code from a decorator, disassemble it, and hack it. You can even access the source of the module it's defined in and recompile it. But I think this is not necessary. Below is an example using a decorated generator, where the yield statement serves as a delimiter between the synchronous and asynchronous parts:
from threading import Thread
import hashlib

def run_async(gen):  # note: 'async' is a reserved word in modern Python, hence the rename
    def func(*args, **kwargs):
        it = gen(*args, **kwargs)
        result = next(it)  # run the synchronous part up to the first yield
        Thread(target=lambda: list(it)).start()  # drive the rest on a worker thread
        return result
    return func

@run_async
def test(text):
    # synchronous part (empty in this example)
    yield  # Use "yield value" if you need to return a meaningful value
    # asynchronous part[s]
    digest = hashlib.md5(text.encode()).hexdigest()
    print(digest)
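Usage is then a single ordinary call (with a hypothetical argument):

test('hello')  # returns immediately; the digest is printed from the worker thread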