I'm trying to use antlr on one of the grammars here (specifically, java 8):
$ antlr4 -Dlanguage=Python3 grammars-v4/java/java8/Java8Lexer.g4
$ antlr4 -Dlanguage=Python3 grammars-v4/java/java8/Java8Parser.g4
This step appears to go smoothly, and when I inspect the directory:
$ ls -l grammars-v4/java/java8/*.py
-rw-r--r-- 1 root root 56583 Nov 30 14:32 grammars-v4/java/java8/Java8Lexer.py
-rw-r--r-- 1 root root 798326 Nov 30 14:32 grammars-v4/java/java8/Java8Parser.py
-rw-r--r-- 1 root root 79928 Nov 30 14:32 grammars-v4/java/java8/Java8ParserListener.py
everything is there. Still, when I try to use the hello world example:
from antlr4 import *
from Java8Lexer import Java8Lexer
from Java8Parser import Java8Parser
input_stream = FileStream("/main.java")
lexer = Java8Lexer(input_stream)
stream = CommonTokenStream(lexer)
parser = Java8Parser(stream)
tree = parser.startRule()
I get an error:
AttributeError: 'Java8Parser' object has no attribute 'startRule'
The parser method(s), startRule in your case, correspond to the parser rules defined in the .g4 grammar.
Look into the Java grammar: there is a parser rule with EOF in it called compilationUnit. Use that instead:
tree = parser.compilationUnit()
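Putting that together, the hello world snippet from the question becomes the following (a minimal sketch, assuming the generated Java8*.py files sit next to this script and /main.java is the file to parse, as in the question):
from antlr4 import *
from Java8Lexer import Java8Lexer
from Java8Parser import Java8Parser

input_stream = FileStream("/main.java")
lexer = Java8Lexer(input_stream)
stream = CommonTokenStream(lexer)
parser = Java8Parser(stream)
# compilationUnit is the rule in the Java 8 grammar that ends with EOF
tree = parser.compilationUnit()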
I want to print a command's output line by line until it finds the main.py file, and then stop.
I tried this code, but it prints the whole output several times instead of going line by line and stopping at the line where it finds the main.py file.
import subprocess
# store ls -l to a variable
get_ls = subprocess.getoutput("ls -l")
# transform output to string
ls = str(get_ls)
# search for main.py file in ls
for line in ls:
    main_py = line.find('main.py')
    print(ls)
    # if main.py is found, print stop and exit
    if main_py == 'main.py':
        print('stop...')
        exit()
The output keeps looping like this:
-rw-r--r-- 1 runner runner 9009 Feb 19 19:00 poetry.lock
-rw-r--r-- 1 runner runner 354 Feb 19 19:00 pyproject.toml
-rw-r--r-- 1 runner runner 329 Feb 25 00:10 main.py
-rw-r--r-- 1 runner runner 383 Feb 14 17:57 replit.nix
-rw-r--r-- 1 runner runner 61 Feb 19 18:46 urls.tmp
drwxr-xr-x 1 runner runner 56 Oct 26 20:53 venv
I want this output:
-rw-r--r-- 1 runner runner 9009 Feb 19 19:00 poetry.lock
-rw-r--r-- 1 runner runner 354 Feb 19 19:00 pyproject.toml
-rw-r--r-- 1 runner runner 329 Feb 25 00:10 main.py
###### stops here #######
How to fix this?
The line for line in ls isn't doing what you think it is. Instead of going line by line, it iterates over ls character by character. What you want is for line in ls.splitlines(). You can then check whether main.py is on that line with "main.py" in line.
import subprocess
# store ls -l to a variable
get_ls = subprocess.getoutput("ls -l")
# transform output to string
ls = str(get_ls)
# search for main.py file in ls
for line in ls.splitlines():
    print(line)
    # if main.py is found, print stop and exit
    if "main.py" in line:
        print('stop...')
        exit()
That should be closer to what you want, I think.
Note that you were also printing ls on every iteration; that needs to change so that only the current line is printed.
In my opinion, if you only want to achieve the result and don't mind changing your logic, this is the most elegant and most "pythonic" approach. I like the simplicity of the os.walk() method:
import os
for root, dirs, files in os.walk("."):
    for filename in files:
        print(filename)
        if filename == "main.py":
            print("stop")
            break
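One small caveat with that approach: break only leaves the inner for loop, so os.walk keeps descending into other directories afterwards. If the goal is to stop the whole search as soon as main.py appears, a minimal sketch (using a hypothetical helper name, find_and_stop) is to return from a function instead:
import os

def find_and_stop(start="."):
    # walk the tree, printing names until main.py is found
    for root, dirs, files in os.walk(start):
        for filename in files:
            print(filename)
            if filename == "main.py":
                print("stop")
                return  # leaves both loops at once

find_and_stop()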
Currently I am experiencing issues with a script that should run automatically after the wifi adapter connects to a network.
After ridiculously extended research, I've made several attempts to add a script to /etc/network/if-up.d/. My script works when run manually; however, it does not run automatically.
User permissions:
ls -al /etc/network/if-up.d/*
-rwxr-xr-x 1 root root 703 Jul 25 2011 /etc/network/if-up.d/000resolvconf
-rwxr-xr-x 1 root root 484 Apr 13 2015 /etc/network/if-up.d/avahi-daemon
-rwxr-xr-x 1 root root 4958 Apr 6 2015 /etc/network/if-up.d/mountnfs
-rwxr-xr-x 1 root root 945 Apr 14 2016 /etc/network/if-up.d/openssh-server
-rwxr-xr-x 1 root root 48 Apr 26 03:21 /etc/network/if-up.d/sendemail
-rwxr-xr-x 1 root root 1483 Jan 6 2013 /etc/network/if-up.d/upstart
lrwxrwxrwx 1 root root 32 Sep 17 2016 /etc/network/if-up.d/wpasupplicant -> ../../wpa_supplicant/ifupdown.sh
Also, I've tried to put the command directly in /etc/network/interfaces by adding the line
post-up /home/pi/r/sendemail.sh
Contents of sendemail.sh:
#!/bin/sh
python /home/pi/r/pip.py
After the reboot, nothing actually happens. I've even tried putting sudo in front.
I assume that wpasupplicant is what causes this, but I cannot figure out how to run my script from the ifupdown.sh script under /etc/wpa_supplicant.
Appreciate your help!
If you have no connectivity prior to initializing the wifi interface, I would suggest adding a cron job with a bash or Python script that checks for connectivity every X minutes: ping a host, and if the host is up, run your Python commands or an external command.
This is rather ambiguous, but hopefully it is of some help.
Here is an example of a script that will check if a host is alive:
import re, commands

class CheckAlive:
    def __init__(self):
        # -c 1 sends a single packet so the ping call returns instead of running forever
        myCommand = commands.getstatusoutput('ping -c 1 ' + 'google.com')
        searchString = r'ping: unknown host'
        match = re.search(searchString, str(myCommand))
        if match:
            # host is not alive
            print "not alive, don't do stuff"
        else:
            # host is alive
            print "alive, time to do stuff"
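As a side note, the commands module used above exists only in Python 2; it was removed in Python 3. A rough equivalent sketch for Python 3.5+ using subprocess (assuming a Unix-style ping that accepts -c for the packet count):
import subprocess

def host_is_alive(host="google.com"):
    # ping exits with status 0 when the host replied to the single packet
    result = subprocess.run(["ping", "-c", "1", host],
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)
    return result.returncode == 0

if host_is_alive():
    print("alive, time to do stuff")
else:
    print("not alive, don't do stuff")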
I wrote a Python script that will run indefinitely. It monitors a directory using PyInotify and uses the Multiprocessing module to run any new files created in those directories through an external script. That all works great.
The problem I am having is writing the output to a file. The filename I chose uses the current date (using datetime.now) and should, theoretically, roll on the hour, every hour.
now = datetime.now()
filename = "/data/db/meta/%s-%s-%s-%s.gz" % (now.year, now.month, now.day, now.hour)
with gzip.open(filename, 'ab') as f:
    f.write(json.dumps(data) + "\n")
    f.close() # Unsure if I need this, here for debug
Unfortunately, when the hour rolls on -- the output stops and never returns. No exceptions are thrown, it just stops working.
total 2.4M
drwxrwxr-x 2 root root 4.0K Sep 8 08:01 .
drwxrwxr-x 4 root root 12K Aug 29 16:04 ..
-rw-r--r-- 1 root root 446K Aug 29 16:59 2016-8-29-16.gz
-rw-r--r-- 1 root root 533K Aug 30 08:59 2016-8-30-8.gz
-rw-r--r-- 1 root root 38K Sep 7 10:59 2016-9-7-10.gz
-rw-r--r-- 1 root root 95K Sep 7 14:59 2016-9-7-14.gz
-rw-r--r-- 1 root root 292K Sep 7 15:59 2016-9-7-15.gz #Manually run
-rw-r--r-- 1 root root 834K Sep 8 08:59 2016-9-8-8.gz
Those files aren't really owned by root; I just changed the ownership for public consumption.
As you can see, all of the files timestamps end at :59 and the next hour never happens.
Is there something that I should take into consideration when doing this? Is there something that I am missing running a Python script indefinitely?
After taking a peek, it seems as if PyInotify was my problem.
See here: https://unix.stackexchange.com/questions/164794/why-doesnt-inotifywatch-detect-changes-on-added-files
I adjusted your code to change the file name each minute, which speeds up debugging quite a bit and yet still tests the hypothesis.
import datetime
import gzip, time
from os.path import expanduser

while True:
    now = datetime.datetime.now()
    filename = expanduser("~") + "/%s-%s-%s-%s-%s.gz" % (now.year, now.month, now.day, now.hour, now.minute)
    with gzip.open(filename, 'a') as f:
        f.write(str(now) + "\n")
        f.write("Data Dump here" + "\n")
    time.sleep(10)
This seems to run without an issue. Changing the time-zone of my pc was also picked up and dealt with. I would suspect, given the above, that your error may lie elsewhere and some judicious debug printing of values at key points is needed. Try using a more granular file name as above to speed up the debugging.
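As another small, optional suggestion: building the file name with strftime zero-pads the month, day and hour, so names like 2016-09-08-08.gz sort and group more predictably than 2016-9-8-8.gz:
from datetime import datetime

now = datetime.now()
# zero-padded year-month-day-hour, e.g. 2016-09-08-08.gz
filename = "/data/db/meta/" + now.strftime("%Y-%m-%d-%H") + ".gz"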
I have a test that requires instructions on how to run it. The goal, beyond working, is to be noob-proof: the instruction manual should consist of one command to run the file and one to run the test. My friend said running unittests won't require the files to be on the PYTHONPATH because it checks the current directory first, but I get:
import unittest

from ordoro_test.main import OrdoroETLMachine


class ETLMachineTests(unittest.TestCase):
    def setUp(self):
        self.api_url = 'https://9g9xhayrh5.execute-api.us-west-2.amazonaws.com/test/data'
        self.headers = {'accept': 'application/json'}

    def test_data_is_returned(self):
        print(OrdoroETLMachine.get_email_data())


if __name__ == '__main__':
    unittest.main()
cchilders:~/ordoro_test [master]$ python test.py
Traceback (most recent call last):
File "test.py", line 4, in <module>
from ordoro_test.main import OrdoroETLMachine
ImportError: No module named ordoro_test.main
cchilders:~/ordoro_test [master]$ ls -l
total 8
-rw-rw-r-- 1 cchilders cchilders 0 Mar 5 19:15 __init__.py
-rwxr-xr-x 1 cchilders cchilders 3099 Mar 5 20:12 main.py
-rwxr-xr-x 1 cchilders cchilders 441 Mar 5 20:19 test.py
How can I fix this and allow the imports in the simplest way possible? Thank you
I tried
from ordoro_test.assignment.main import OrdoroETLMachine
No dice
Adding an empty __init__.py file at the same level as the ordoro_test folder should fix your problem.
See the Modules section of the Python 2.7 documentation for more information.
Use an explicit relative import: from .main import OrdoroETLMachine. See the intra-package references section of the documentation.
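A note on that last suggestion: a relative import only resolves when test.py is run as part of the package, not as a plain script. A minimal sketch, assuming you run it from the directory that contains the ordoro_test folder:
# ordoro_test/test.py
from .main import OrdoroETLMachine

# run it as a module rather than as a file, from the parent directory:
#   python -m ordoro_test.test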
I'm having some trouble configuring nginx to work with Python3.2. I'm also struggling to find anything resembling a decent tutorial on the subject. I did however find a decent tutorial on getting nginx to play nice with Python2.7. My thought process was that since uwsgi works with plugins it should be a relatively simple exercise to follow the Python2.7 tutorial and just swap out the python plugin.
Here is the tutorial I followed to get a basic Hello World site working: https://library.linode.com/web-servers/nginx/python-uwsgi/ubuntu-12.04-precise-pangolin
/etc/uwsgi/apps_available/my_site_url.xml looks like:
<uwsgi>
    <plugin>python</plugin>
    <socket>/run/uwsgi/app/my_site_urlmy_site_url.socket</socket>
    <pythonpath>/srv/www/my_site_url/application/</pythonpath>
    <app mountpoint="/">
        <script>wsgi_configuration_module</script>
    </app>
    <master/>
    <processes>4</processes>
    <harakiri>60</harakiri>
    <reload-mercy>8</reload-mercy>
    <cpu-affinity>1</cpu-affinity>
    <stats>/tmp/stats.socket</stats>
    <max-requests>2000</max-requests>
    <limit-as>512</limit-as>
    <reload-on-as>256</reload-on-as>
    <reload-on-rss>192</reload-on-rss>
    <no-orphans/>
    <vacuum/>
</uwsgi>
Once everything was working, I installed uwsgi-plugin-python3 via apt-get. ls -l /usr/lib/uwsgi/plugins/ now outputs:
-rw-r--r-- 1 root root 142936 Jul 17 2012 python27_plugin.so
-rw-r--r-- 1 root root 147192 Jul 17 2012 python32_plugin.so
lrwxrwxrwx 1 root root 38 May 17 11:44 python3_plugin.so -> /etc/alternatives/uwsgi-plugin-python3
lrwxrwxrwx 1 root root 37 May 18 12:14 python_plugin.so -> /etc/alternatives/uwsgi-plugin-python
Changing python to python3 or python32 in my_site_url.xml has the same effect, i.e.:
The hello world page takes ages to load (it was effectively instantaneous before) and then comes up blank
My site's access log records the access
My site's error log records no new errors
/var/log/uwsgi/app/my_site_url.log records the following:
[pid: 4503|app: 0|req: 1/2] 192.168.1.5 () {42 vars in 630 bytes} [Sun May 19 10:49:12 2013] GET / => generated 0 bytes in 0 msecs (HTTP/1.1 200) 2 headers in 65 bytes (1 switches on core 0)
So my question is:
How can I correctly configure this app to work on Python 3.2?
The listed tutorial has the following application code:
def application(environ, start_response):
    status = '200 OK'
    output = 'Hello World!'
    response_headers = [('Content-type', 'text/plain'),
                        ('Content-Length', str(len(output)))]
    start_response(status, response_headers)
    return [output]
This is incompatible with Python 3.2 because the response body must be a bytes object, not a str. Replacing the application function with the following fixes things:
def application(env, start_response):
    start_response('200 OK', [('Content-Type', 'text/html')])
    return b"Hello World"