Communicating from node to python using json gives me an error

I'm trying to communicate with Python from Node using PythonShell. When I set the mode to json, I get this error:
WARNING: Logging before flag parsing goes to stderr.
^
SyntaxError: Unexpected token W in JSON at position 0
My Python file so far contains the following:
import json
import random
import tensorflow
import tflearn
import numpy
import sys
import pickle
import nltk
from nltk.stem.lancaster import LancasterStemmer
stemmer = LancasterStemmer()
nltk.download('punkt')
This is how I'm calling the python file from node:
const options = {
    mode: 'json',
    pythonOptions: ['-u'],
    pythonPath: 'python'
};
let pyshell = new PythonShell('./python/script.py', options);
pyshell.on('message', async function(message) {
    autoResponseHandler(message);
});
What am I doing wrong, and how can I set the mode to json correctly?

Answer:
In your Python script, you need to output a message in valid JSON form, e.g. '{"ex": "something"}'.
I suspect you're emitting a plain string as the message that pyshell.on() receives. Before you print the message, you need to convert it to JSON.
So when I run the following code:
import json
import random
import tensorflow
import tflearn
import numpy
import sys
import pickle
import nltk
from nltk.stem.lancaster import LancasterStemmer
stemmer = LancasterStemmer()
message = nltk.download('punkt')
print(message)
The output is 'True'... What is your intention here?
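For illustration, here is a minimal sketch of a script whose only stdout output is JSON, which is what PythonShell's json mode can parse (the handle_input function and the echoed payload are assumptions for the example, not part of the original script):
import json
import sys

def handle_input(line):
    # hypothetical handler: wrap the received text in a JSON-serializable dict
    return {"response": "received: " + line.strip()}

for line in sys.stdin:
    # serialize with json.dumps so every line written to stdout is valid JSON
    print(json.dumps(handle_input(line)))
    sys.stdout.flush()
Any non-JSON text that reaches the stream PythonShell parses (here, the TensorFlow/abseil warning line) triggers the same "Unexpected token W in JSON" error, so keep library logging and download progress out of that stream or suppress it.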

Related

Importing transformers, evaluate, or torch python library and Internal Server Error 500

It seems that when I call a Python function from an XMLHttpRequest so that its output prints to the browser, the script leads to an error if any of these import statements (the ones shown commented out below) are run:
import traceback
import transformers
"""
from transformers import AutoTokenizer, AutoModel, AutoModelForSequenceClassification, TrainingArguments, Trainer
import evaluate
import torch
"""
import pandas as pd
import numpy as np
import datasets
from datasets import load_dataset, load_metric, load_from_disk, Value
import cgi, cgitb
print("Content-Type: text/html\n")
print("Hello!")
That is, if I keep them commented, my hello statement shows in the browser; otherwise, I get an Internal Server Error 500. Any help would be greatly appreciated!
I already tried suppressing warnings and logs!
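A diagnostic sketch only (assuming the script runs under a classic CGI handler, as the cgi/cgitb imports suggest): enabling cgitb before the heavy imports renders a Python traceback in the browser instead of a bare 500, which at least shows whether the failure is an ordinary Python exception:
import cgitb
cgitb.enable()  # show full tracebacks as HTML instead of a bare 500

# send the header first so any later traceback is rendered as a page
print("Content-Type: text/html\n")

# heavy imports go after the header so an ImportError is shown in the browser
import transformers
import torch

print("Hello!")
This will not help if the interpreter is being killed (for example by a memory limit) or the server times out, but it distinguishes those cases from an import-time error.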

How do I import firebase package into Thonny? Python Version 3.7.9

When I try to import firebase in Thonny, I constantly get the error:
Traceback (most recent call last):
File "C:\Users\jackl\brhyu.py", line 1, in <module>
import firebase
File "C:\Users\jackl\AppData\Roaming\Python\Python37\site-packages\firebase\__init__.py", line 3
from .async import process_pool
^
SyntaxError: invalid syntax
Any ideas what's wrong? My code is literally just:
import firebase
I have tried commenting out the ".async" part, which didn't work.
I have also tried replacing the code in the package's backend, such as replacing
import atexit
from .async import process_pool
from firebase import *
with
import atexit
from .async import process_pool
from .multiprocess_pool import process_pool
from firebase import *
and then replacing
from .firebase_token_generator import FirebaseTokenGenerator
from .decorators import http_connection
from .async import process_pool
from .jsonutil import JSONEncoder
__all__ = ['FirebaseAuthentication', 'FirebaseApplication']
with
import json
from .firebase_token_generator import FirebaseTokenGenerator
from .decorators import http_connection
from .async import process_pool
from .multiprocess_pool import process_pool
from .jsonutil import JSONEncoder
I'm completely stumped...
Any help is greatly appreciated; I need this for a big exam.
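As a side note on the SyntaxError itself: async became a reserved keyword in Python 3.7, so a module literally named async cannot be imported there at all. A one-line sketch reproduces the same error outside the firebase package:
# Python 3.7+: 'async' is a keyword, so using it as a module name is a syntax error
from async import process_pool  # SyntaxError: invalid syntax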

how to call a module in a function in pp where that function has other functions in it?

I'm currently using Parallel Python (pp), and in the parameters of job_server.submit I added the library to modules, but the problem is that that library itself imports other libraries. So what should I do?
Here is the code I'm trying to run:
from tools.demo import detect_cn
import pp
job_server = pp.Server()
f1 = job_server.submit(detect_cn, (filename,),modules=('tools.demo',))
f2 = job_server.submit(detect_cn, (filename1,),modules=('tools.demo',))
cnis, preproc_time, roi_file_images=f1()
cnis1, preproc_time1, roi_file_images1=f2()
and this is part of the code of demo.py:
import _init_paths
from fast_rcnn.config import cfg
from fast_rcnn.test import im_detect
from fast_rcnn.nms_wrapper import nms
from utils.timer import Timer
from ocr.clstm import clstm_ocr
from ocr.clstm import clstm_ocr_calib
import matplotlib.pyplot as plt
import numpy as np
import scipy.io as sio
import caffe, os, sys, cv2
import argparse
import werkzeug
import datetime
import math
import pytesseract
from PIL import Image
def detect_cn(filename):
    cfg.TEST.HAS_RPN = True  # Use RPN for proposals
    args = parse_args()
    prototxt = os.path.join(cfg.MODELS_DIR, NETS[args.demo_net][0],
                            'faster_rcnn_alt_opt', 'faster_rcnn_test.pt')
    caffemodel = os.path.join(cfg.DATA_DIR, 'faster_rcnn_models',
                              NETS[args.demo_net][1])
    if not os.path.isfile(caffemodel):
        raise IOError(('{:s} not found.\nDid you run ./data/script/'
                       'fetch_faster_rcnn_models.sh?').format(caffemodel))
    if args.cpu_mode:
        caffe.set_mode_cpu()
    else:
        caffe.set_mode_gpu()
        caffe.set_device(args.gpu_id)
        cfg.GPU_ID = args.gpu_id
    net = caffe.Net(prototxt, caffemodel, caffe.TEST)
    print '\n\nLoaded network {:s}'.format(caffemodel)
    print '~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~'
    print 'Demo for CN image...'
    return demo2(net, filename)
Do you think I should load all those libraries in the modules argument of job_server.submit?
I want to use pp because detect_cn takes 2 minutes to give results.
Any ideas?
Yes, you should import all these modules when you submit your function into the execution queue.
f1 = job_server.submit(detect_cn, (filename,),modules=("math","numpy", ...))
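A slightly fuller sketch of the same idea (the module list below is illustrative; pass whatever top-level modules demo.py actually needs at run time):
import pp
from tools.demo import detect_cn

job_server = pp.Server()
# every module the submitted function uses must be named here,
# including the ones demo.py itself imports
deps = ('tools.demo', 'numpy', 'math', 'os', 'cv2')
# filename as in the original snippet
f1 = job_server.submit(detect_cn, (filename,), modules=deps)
cnis, preproc_time, roi_file_images = f1()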

c++ embed python urllib

python 2.7.10(./Watcher/epMain.py):
import subprocess
import hashlib
import os
import sys
import zipfile
import httplib
#import urllib
#import urllib2
def letsbegin():
    subprocess.call('a.exe')
    httpClient = httplib.HTTPConnection('www.google.com', 80, timeout=30)
    httpClient.request('GET', '/updata/Client_V.html')
    response = httpClient.getresponse()
    targetV = response.read()
letsbegin()
c++:
Py_Initialize();
PyRun_SimpleString("import sys");
PyRun_SimpleString("sys.path.append('./Watcher')");
PyObject *pyMain = PyImport_ImportModule("epMain");
The pyMain is always NULL but after I change my python code to:
import subprocess
import hashlib
import os
import sys
import zipfile
#import httplib
#import urllib
#import urllib2
def letsbegin():
    subprocess.call('a.exe')
    httpClient = httplib.HTTPConnection('www.google.com', 80, timeout=30)
    httpClient.request('GET', '/updata/Client_V.html')
    response = httpClient.getresponse()
    targetV = response.read()
letsbegin()
then it is OK to load this module in my C++ code.
But I really want to use httplib in this project. How? I can't use:
PyImport_ImportModule("httplib")
because the Python code may update often.
Besides, when I use
d:\pros\go\Watcher>python epMain.py
it works!
urllib and urllib2 also have problems like this.
It seems like you compiled with Python 3.x includes/libs instead of 2.x.
In Python 3.x, httplib and urllib2 are not available. (They were renamed to http.client, urllib.request, and urllib.error.)
Change the compile options to include and link against Python 2.x.
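For reference, a minimal sketch of the same request under the Python 3 module name, only relevant if you decide to stay on Python 3.x:
from http.client import HTTPConnection

# same GET request as the httplib version above, using the renamed Python 3 module
httpClient = HTTPConnection('www.google.com', 80, timeout=30)
httpClient.request('GET', '/updata/Client_V.html')
targetV = httpClient.getresponse().read()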
UPDATE
To check which version the C++ program is using, try the following code:
Py_Initialize();
PyRun_SimpleString("import sys");
PyRun_SimpleString("print(sys.version)");
...

Python printing a string yields an unexpected result

Trying to print / work with a specific string is driving me crazy in Python, or to be more specific: I am using Jython.
The simple command
print "appilog.xxxxx.xxxxx.xxxxxxx"
results in a print of something looking like a Java package:
com.xxxxx.xxxxx.xxxxxx
Does Python/Jython do any special lookup for strings? Is there a way to enforce the usage of the "original" string I entered before?
Other things I tried are the following:
print ("appilog...")
print r"appilog..."
print str("appilog...")
print str(r"appilog...")
Imports used in the script this command is located in are the following:
from com.hp.ucmdb.discovery.probe.services.dynamic.core import EnvironmentInformation
#coding=utf-8
import string
import re
import sys
import os
import ConfigParser
import shutil
import StringIO
import logger
import modeling
import time
import subprocess
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.PrintStream;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeDataSupport;
import javax.management.openmbean.CompositeType;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import datetime
from appilog.common.system.types.vectors import ObjectStateHolderVector
