I'm trying to run a .jar file from Python, but I get the following error. I need help solving it.
The Python code is:
import jpype
import os.path
jvmPath = jpype.getDefaultJVMPath()
jarPath = os.path.join(os.path.abspath('.'), r'C:\Programación\Java\JpypePrueba\dist\JpypePrueba.jar')
dependency = os.path.join(os.path.abspath('.'), r'C:\Programación\Java\JpypePrueba')
jpype.startJVM(jvmPath, "-ea", "-Djava.class.path=%s" % jarPath, "-Djava.ext.dirs=%s" % dependency)
JDClass = jpype.JClass("project1.sort")
jd = JDClass()
print(jd.calc(1, 2))
jpype.shutdownJVM()
and the Java code is:
package project1;

public class sort {
    public static void main(String[] args) {
        sort t2 = new sort();
        System.out.println(t2.calc(1, 2));
    }

    public int calc(int a, int b) {
        return a + b;
    }
}
The error that is generated in Python is the following:
runfile('C:/Programación/Java/JpypePrueba/JpypePrueba.py', wdir='C:/Programación/Java/JpypePrueba')
Traceback (most recent call last):
  File "C:\Programación\Java\JpypePrueba\JpypePrueba.py", line 17, in <module>
    jpype.startJVM(jvmPath, "-ea", "-Djava.class.path=%s" % jarPath, "-Djava.ext.dirs=%s" % dependency)
  File "C:\ProgramData\Anaconda3\lib\site-packages\jpype\_core.py", line 166, in startJVM
    raise OSError('JVM is already started')
OSError: JVM is already started
The location of my main Python code is shown in the attached image:
The program should return the sum of 2+1=3.
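For reference, jpype.startJVM can only succeed once per process. The OSError: JVM is already started in the traceback above is what JPype raises when the call runs a second time in the same interpreter session, which is exactly what Spyder's runfile() does on every re-run. A minimal guard, sketched on the assumption that this is the cause here (jpype.isJVMStarted() is part of the public JPype API):

import jpype

jvmPath = jpype.getDefaultJVMPath()
jarPath = r'C:\Programación\Java\JpypePrueba\dist\JpypePrueba.jar'

# Start the JVM only if this process has not started one already; a second
# unconditional jpype.startJVM() in the same session raises the OSError above.
if not jpype.isJVMStarted():
    jpype.startJVM(jvmPath, "-ea", "-Djava.class.path=%s" % jarPath)

sort = jpype.JClass("project1.sort")
print(sort().calc(1, 2))  # expected output: 3

As far as I know, JPype cannot restart a JVM in the same process once shutdownJVM() has run, so the guard above deliberately leaves the JVM running between re-runs.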
I am following the SWIG tutorial and I'm currently on the section: "32.9.1 Converting Python list to a char **". The example in question returns a malloc error on my machine:
import example
example.print_args(["a","bc","dc"])
python(57911,0x10bd32e00) malloc: *** error for object 0x7f7ee0406b90: pointer being freed was not allocated
python(57911,0x10bd32e00) malloc: *** set a breakpoint in malloc_error_break to debug
1 57911 abort python
The error is unexpected, as this is exactly the code that the tutorial offers. Any help welcome! Thanks in advance.
Specs:
MacOS Big Sur
Python 3.8
C++17
Here is my setup.py (the whole file, for reproducibility):
#!/usr/bin/env python
"""
setup.py file for SWIG example
"""
from distutils.core import setup, Extension
import os
import sys
import glob
# gather up all the source files
srcFiles = ['example.i']
includeDirs = []
srcDir = os.path.abspath('src')
for root, dirnames, filenames in os.walk(srcDir):
    for dirname in dirnames:
        absPath = os.path.join(root, dirname)
        globStr = "%s/*.c*" % absPath
        files = glob.glob(globStr)
        includeDirs.append(absPath)
        srcFiles += files

extra_args = ['-stdlib=libc++', '-mmacosx-version-min=10.7', '-std=c++17', '-fno-rtti']
os.environ["CC"] = 'clang++'

example_module = Extension('_example',
                           srcFiles,  # + ['example.cpp'], # ['example_wrap.cxx', 'example.cpp'],
                           include_dirs=includeDirs,
                           swig_opts=['-c++'],
                           extra_compile_args=extra_args,
                           )

setup(name='example',
      version='0.1',
      author="SWIG Docs",
      description="""Simple swig example from docs""",
      ext_modules=[example_module],
      py_modules=["example"],
      )
The example code would work with Python 2, but has a bug as well as a syntax change for Python 3. A char ** must be passed byte strings: these are the default in Python 2 when using "string" syntax, but in Python 3 they need a leading b, e.g. b"string".
This works:
import example
example.print_args([b"a",b"bc",b"dc"])
The crash is due to a bug calling free twice if an incorrect parameter type is found. Make the following change to the example:
if (PyString_Check(o)) {
  $1[i] = PyString_AsString(PyList_GetItem($input, i));
} else {
  //free($1); // REMOVE THIS FREE
  PyErr_SetString(PyExc_TypeError, "list must contain strings");
  SWIG_fail;
}
SWIG_fail ends up calling the freearg typemap, which calls free a second time. With this change, you should see the following when passing incorrect arguments, such as a non-list or Unicode strings instead of byte strings:
>>> import argv
>>> argv.print_args(1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\argv.py", line 66, in print_args
    return _argv.print_args(argv)
TypeError: not a list
>>> argv.print_args(['abc','def'])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\argv.py", line 66, in print_args
    return _argv.print_args(argv)
TypeError: list must contain strings
>>> argv.print_args([b'abc',b'def'])
argv[0] = abc
argv[1] = def
2
Changing the error message to "list must contain byte strings" would help as well 😊
I have a set of Java source files and I need to modify these .java files (remove whitespace, comments, etc.). For that purpose, I downloaded the Java lexer and parser files from this repository and compiled them using antlr-4.7.2-complete.jar. I also installed the antlr4-python3-runtime using pip.
I tried removing the multiline comment in the sample HelloWorld program with the code below, but I got the following traceback. How can I resolve this problem?
For compiling the lexer and parser:
java -jar [path_to_antlr-4.7.2-complete.jar] -Dlanguage=Python3 [path_to_lexer_file]
java -jar [path_to_antlr-4.7.2-complete.jar] -Dlanguage=Python3 [path_to_parser_file]
Sample Java file:
public class HelloWorld {
    public static void main(String[] args) {
        /*
        System.out.println("Hello World");
        */
    }
}
Python code for altering files:
from antlr4 import InputStream, CommonTokenStream
from antlr4 import TokenStreamRewriter
import JavaLexer  # generated by ANTLR from the lexer grammar

source = open("./HelloWorld.java", "r")
codeStream = InputStream(source.read())
lexer = JavaLexer.JavaLexer(codeStream)
token_stream = CommonTokenStream(lexer)
token_stream.fill()
rewriter = TokenStreamRewriter.TokenStreamRewriter(token_stream)
for token in token_stream.tokens:
    if token.type == JavaLexer.JavaLexer.COMMENT:
        rewriter.deleteToken(token)
Traceback (most recent call last):
  File "/home/alp/PycharmProjects/JavaParsingTutorial/parser.py", line 31, in <module>
    rewriter.deleteToken(token)
  File "/usr/local/lib/python3.6/dist-packages/antlr4/TokenStreamRewriter.py", line 80, in deleteToken
    self.delete(self.DEFAULT_PROGRAM_NAME, token, token)
  File "/usr/local/lib/python3.6/dist-packages/antlr4/TokenStreamRewriter.py", line 88, in delete
    self.replace(program_name, from_idx, to_idx, None)
  File "/usr/local/lib/python3.6/dist-packages/antlr4/TokenStreamRewriter.py", line 71, in replace
    if any((from_idx > to_idx, from_idx < 0, to_idx < 0, to_idx >= len(self.tokens.tokens))):
TypeError: '>' not supported between instances of 'CommonToken' and 'CommonToken'
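One possible workaround, sketched here on the assumption that the failure happens because deleteToken() forwards the CommonToken objects themselves into the index comparisons shown in the traceback, is to call delete() with the token's integer index instead of the token:

# Pass integer token indices rather than Token objects, so that
# TokenStreamRewriter.replace() compares ints instead of CommonTokens.
for token in token_stream.tokens:
    if token.type == JavaLexer.JavaLexer.COMMENT:
        rewriter.delete(rewriter.DEFAULT_PROGRAM_NAME, token.tokenIndex, token.tokenIndex)

print(rewriter.getDefaultText())  # the rewritten source, without the comments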
I want to run Dask in a Java process using Jython.
I installed dask[complete] using pip, but the Java process raises ImportError: dask.
How can I fix this?
package test;

import org.python.core.*;
import org.python.util.*;

public class TestJython {
    private static PythonInterpreter pi;

    public static void main(String[] args) throws PyException {
        pi = new PythonInterpreter();
        PySystemState sys = pi.getSystemState();
        sys.path.append(new PyString("/usr/local/lib/python2.7/dist-packages"));
        pi.exec("import dask.dataframe as dd");
    }
}
Error log:
Exception in thread "MainThread" Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/dask/dataframe/__init__.py", line 31, in <module>
    raise ImportError(str(e) + '\n\n' + msg)
ImportError: Missing required dependencies ['numpy']
It looks like the PythonInterpreter isn't initialized with the correct PYTHONPATH setup. This is not an issue with Dask, but with how you're initializing PythonInterpreter. You may need to set the python.path system property, or use the JYTHONPATH environment variable: https://www.stefaanlippens.net/jython_and_pythonpath/.
Note that the Dask team has no experience running Dask in Jython, and cannot guarantee that things will work or be performant.
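For example (a sketch only; the site-packages path is copied from the sys.path.append call in the question, and the classpath setup for the Jython jar is omitted), either of the following should put that directory on Jython's module search path:
java -Dpython.path=/usr/local/lib/python2.7/dist-packages test.TestJython
JYTHONPATH=/usr/local/lib/python2.7/dist-packages java test.TestJython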
I am trying to execute a Dataflow jar through an Airflow script, using the DataFlowJavaOperator. In the jar param, I am passing the path of the executable jar file present on the local system. But when I try to run this job I get the following error:
{gcp_dataflow_hook.py:108} INFO - Start waiting for DataFlow process to complete.
[2017-09-12 16:59:38,225] {models.py:1417} ERROR - DataFlow failed with return code 1
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/airflow/models.py", line 1374, in run
    result = task_copy.execute(context=context)
  File "/usr/lib/python2.7/site-packages/airflow/contrib/operators/dataflow_operator.py", line 116, in execute
    hook.start_java_dataflow(self.task_id, dataflow_options, self.jar)
  File "/usr/lib/python2.7/site-packages/airflow/contrib/hooks/gcp_dataflow_hook.py", line 146, in start_java_dataflow
    task_id, variables, dataflow, name, ["java", "-jar"])
  File "/usr/lib/python2.7/site-packages/airflow/contrib/hooks/gcp_dataflow_hook.py", line 138, in _start_dataflow
    _Dataflow(cmd).wait_for_done()
  File "/usr/lib/python2.7/site-packages/airflow/contrib/hooks/gcp_dataflow_hook.py", line 119, in wait_for_done
    self._proc.returncode))
Exception: DataFlow failed with return code 1
My Airflow script is:
from airflow import DAG
from airflow.contrib.operators.dataflow_operator import DataFlowJavaOperator
from airflow.contrib.hooks.gcs_hook import GoogleCloudStorageHook
from airflow.models import BaseOperator
from airflow.utils.decorators import apply_defaults
from datetime import datetime, timedelta

default_args = {
    'owner': 'airflow',
    'start_date': datetime(2017, 3, 16),
    'email': [<EmailID>],
    'dataflow_default_options': {
        'project': '<ProjectId>',
        # 'zone': 'europe-west1-d', (I am not sure what I should pass here)
        'stagingLocation': 'gs://spark_3/staging/'
    }
}

dag = DAG('Dataflow', schedule_interval=timedelta(minutes=2),
          default_args=default_args)

dataflow1 = DataFlowJavaOperator(
    task_id='dataflow_example',
    jar='/root/airflow_scripts/csvwriter.jar',
    gcp_conn_id='GCP_smoke',
    dag=dag)
I am not sure what mistake I am making. Can anybody please help me get out of this?
Note: I am creating this jar by selecting the Runnable JAR file option and packaging all the external dependencies.
The problem was with the jar that I was using. Before using a jar, make sure that it executes as expected.
Example:
If your jar is dataflow_job_1.jar, execute it using
java -jar dataflow_job_1.jar --parameters_if_any
Once your jar runs successfully, proceed with using it in the Airflow DataFlowJavaOperator.
Furthermore,
If you encounter errors related to Coders, you may have to write your own coder to execute the code.
For instance, I had a problem with the TableRow class, as it did not have a default coder, and thus I had to write this one:
TableRowCoder:
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.List;

import org.apache.beam.sdk.coders.Coder;
import org.apache.beam.sdk.coders.CoderException;
import org.apache.beam.sdk.io.gcp.bigquery.TableRowJsonCoder;

import com.google.api.services.bigquery.model.TableRow;

public class TableRowCoder extends Coder<TableRow> {
    private static final long serialVersionUID = 1L;
    private static final Coder<TableRow> tableRow = TableRowJsonCoder.of();

    @Override
    public void encode(TableRow value, OutputStream outStream) throws CoderException, IOException {
        tableRow.encode(value, outStream);
    }

    @Override
    public TableRow decode(InputStream inStream) throws CoderException, IOException {
        return new TableRow().set("F1", tableRow.decode(inStream));
    }

    @Override
    public List<? extends Coder<?>> getCoderArguments() {
        // TODO Auto-generated method stub
        return null;
    }

    @Override
    public void verifyDeterministic() throws org.apache.beam.sdk.coders.Coder.NonDeterministicException {
    }
}
Then register this coder in your code using
pipeline.getCoderRegistry().registerCoderForClass(TableRow.class, new TableRowCoder());
If there are still errors (ones not related to coders), navigate to:
*.jar\META-INF\services\FileSystemRegistrar
and add any dependencies that may occur.
For example, there might be a staging error such as:
Unable to find registrar for gs
I had to add the following line to make it work:
org.apache.beam.sdk.extensions.gcp.storage.GcsFileSystemRegistrar
OK, so after searching a little too hard over the internet, I still have my same issue. I have a very simple Python script that opens the specified Excel file and then runs a macro.
I know for a fact that my Python script runs as it should standalone.
I know for a fact that my C++ code runs as it should.
But the combo of both creates a 'com_error'. Just so anyone who sees this knows, these are all the tests I have run:
(1) simple Python script (just prints hello) --> passed
(2) use C++ to run the same simple .py script --> passed
(3) more advanced Python script (opens Excel, runs macro, saves and closes) --> passed
(4) use C++ code to run the advanced .py script --> failed
And there is my problem. This has something to do with win32com.client and an error the server throws because it can't find the file location (but trust me, it can, because it passed the 'find file' test).
I'm running Windows 7, Python 2.7, and the latest version of JetBrains CLion (2017.1.2).
Any help would be so appreciated. Thanks! Happy coding.
C++ code:
#include <iostream>
#include <Windows.h>
using namespace std;

int main() {
    // CreateProcess may modify the command-line buffer, so use a writable
    // array instead of casting away const from a string literal.
    char cmd[] = "python C:\\Users\\Alex.Valente\\Desktop\\python.py";
    PROCESS_INFORMATION processInformation = {0};
    STARTUPINFO startupInfo = {0};
    startupInfo.cb = sizeof(startupInfo);
    BOOL result = CreateProcess(NULL, cmd,
                                NULL, NULL, FALSE,
                                NORMAL_PRIORITY_CLASS,
                                GetEnvironmentStrings(), NULL, &startupInfo, &processInformation);
    if (!result) {
        return -1;
    }
    WaitForSingleObject(processInformation.hProcess, INFINITE);
    return 0;
}
Python Script:
from __future__ import print_function
import unittest
import os.path
import win32com.client
import os
class ExcelMacro(unittest.TestCase):
    def test_excel_macro(self):
        xlApp = win32com.client.DispatchEx('Excel.Application')
        xlsPath = r'C:\Users\Alex.Valente\Desktop\Data.csv'
        xlApp.Visible = True
        wb = xlApp.Workbooks.Open(Filename=xlsPath)
        xlApp.Run("PERSONAL.XLSB!PythonTest")
        wb.Save()
        wb.Close()
        xlApp.Quit()
        print("Macro ran successfully!")

if __name__ == "__main__":
    unittest.main()
And the error that is printed after I run it:
======================================================================
ERROR: test_excel_macro (__main__.ExcelMacro)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\Users\Alex.Valente\Desktop\python.py", line 25, in test_excel_macro
    wb = xlApp.Workbooks.Open(Filename=xlsPath)
  File "<COMObject <unknown>>", line 8, in Open
com_error: (-2147417851, 'The server threw an exception.', None, None)
----------------------------------------------------------------------
Ran 1 test in 6.305s
FAILED (errors=1)