I need to consume a web service: a script on my own web server should make GET requests to it regularly. There is documentation with several C# examples, which should work (though I could not get it running on my Windows PC).
https://integration.questback.com/integration.svc
You have created a service.
To test this service, you will need to create a client and use it to call the service. You can do this using the svcutil.exe tool from the command line with the following syntax:
svcutil.exe https://integration.questback.com/Integration.svc?wsdl
This will generate a configuration file and a code file that contains the client class. Add the two files to your client application and use the generated client class to call the service. For example:
C#
class Test
{
    static void Main()
    {
        QuestBackIntegrationLibraryClient client = new QuestBackIntegrationLibraryClient();
        // Use the 'client' variable to call operations on the service.

        // Always close the client.
        client.Close();
    }
}
Since the server is Linux-based and I don't know any C# or XML, I wanted to ask whether there is a way to make this run on a Linux server, preferably with Python (I know this question is quite vague, I'm sorry).
Thank you!
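For the record, Python has SOAP client libraries (zeep is a common choice, and it can consume the WSDL directly). The sketch below uses only the standard library; the operation name and envelope body are assumptions, so check the service's WSDL for the real contract:

```python
# Hypothetical sketch of calling a SOAP service from Python with the
# standard library only. The <GetStatus/> operation and the SOAPAction
# header value are assumptions, not taken from the real WSDL.
import urllib.request
import xml.etree.ElementTree as ET

SERVICE_URL = "https://integration.questback.com/integration.svc"

def build_envelope(body_xml):
    """Wrap an operation payload in a minimal SOAP 1.1 envelope."""
    return (
        '<?xml version="1.0" encoding="utf-8"?>'
        '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
        "<soap:Body>" + body_xml + "</soap:Body>"
        "</soap:Envelope>"
    )

def call_service(body_xml, soap_action):
    """POST a SOAP envelope to the service and return the raw response text."""
    req = urllib.request.Request(
        SERVICE_URL,
        data=build_envelope(body_xml).encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": soap_action},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")

# The envelope builder can be checked offline, without the network:
envelope = build_envelope("<GetStatus/>")
root = ET.fromstring(envelope.encode("utf-8"))
print(root.tag)  # {http://schemas.xmlsoap.org/soap/envelope/}Envelope
```

With zeep installed, the equivalent is roughly `client = zeep.Client('https://integration.questback.com/Integration.svc?wsdl')`, which generates the operations from the WSDL much like svcutil.exe does for C#.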
I'm new to JavaScript, so I would like to keep things at the bare minimum. Is there a way I can use Electron to communicate with a Python script without Node.js? My app is just a basic app that takes some input from users on an HTML page, and I need that text input to be processed in Python, which writes an Excel file. So there is not much happening in the HTML; is there a simple way to transfer the input to the Python file? I want to use Electron because I need this HTML to be my UI, and I also need to distribute the app.
I guess the answer is "no": the main process running node will always be there.
An Electron app consists of a JavaScript main process, and one or more JavaScript renderer processes. There is no built-in Python support. And the user will need Python already installed. So, it sounds like a poor fit for what you need.
The answers here may be useful, and will show how to call the python script. I took a quick look at the flexx toolkit mentioned there. It seems to work with the user's browser, rather than producing a single executable.
Recently I did this with a bit of a trick, which I hope will help you. These are the steps I followed:
I created a standalone Python exe using PyInstaller; the exe runs a Flask server internally, and I put it inside my Node application.
Now we have to start the Flask server and send it a request for processing. I did this with the execFile function as a child process, for which I wrote a function like this:
async function callFlask() {
    const execFile = require('child_process').execFile;
    // NB: execFile returns immediately; in practice, wait for the
    // Flask server to come up before sending it requests.
    execFile('path_to_python_exe', function (err, data) {
        if (err) {
            console.error(err);
            return;
        }
    });
}
Now that the Flask server is started, we send it a request with fetch:
await callFlask();
await fetch('host_ip_defined_in_flask' + encodeURIComponent('data'));
We can extend the chain further to get the response from Python, if any, and proceed. For example:
await callFlask();
await fetch('host_ip_defined_in_flask' + encodeURIComponent('data'))
    .then(res => res.text())
    .then(body => console.log(body));
Here, whatever output data Python returns will be printed to the console, and you can make your Node application behave differently depending on it.
You can also package your app with one of the available packagers for Electron, like electron-packager; it works like a charm.
There are some disadvantages to bundling Python like this: it increases your package size, and the process can be difficult to kill from Electron after it finishes, which puts an extra burden on the host machine.
I am assuming that explaining how to create a Flask server is outside the scope of this question; if you face any issues, let me know. I hope this helps.
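For completeness, the Python side can be tiny. Below is a minimal sketch of such a local HTTP processor, using only the standard library in place of Flask; the port and the process() step are placeholders for the real work:

```python
# Minimal stand-in for the bundled Flask server: the packaged exe would
# listen on localhost and run the processing step on whatever the
# Electron side sends. process() is a placeholder for the real work
# (e.g. writing the Excel file).
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import unquote, urlparse

def process(data):
    """Placeholder processing step."""
    return data.upper()

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Treat the URL path as the payload sent by fetch().
        data = unquote(urlparse(self.path).path.lstrip("/"))
        body = process(data).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To actually serve requests:
#   HTTPServer(("127.0.0.1", 5000), Handler).serve_forever()

print(process("hello"))  # -> HELLO
```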
I know this is a real open-ended question, but I'm new to python and am building a simple one off web app to give a non technical team some self service capabilities. This team has a bunch of repetitive tasks that they kick over to another team that are just begging to be automated, like restarting a few processes on remote hosts, grep logs, cleanup old files, deploy/restart new versions of an application, get current running versions, etc. The users will be clicking buttons and watching the output in the GUI, they WILL NOT be manually entering commands to run (I know this is dangerous). Any new tasks will be scripted and added to the app from the technical support team.
Now, the only piece I'm not sure on is how to get (near) real time output from the commands back to the GUI. I've built a very similar app in PHP in the past, and what I did was flush the output of the remote commands to a db, and then would poll the db with ajax and append new output. It was pretty simple and worked great even though the output would come back in chunks (I had the output written to the GUI line by line, so it looked like it was real time). Is there a better way to do this? I was thinking of using web sockets to push the output of the command back to the GUI. Good idea? Bad idea? Anything better with a python library? I can also use nodejs, if that makes any difference, but I'm new to both languages (I do already have a simple python flask application up and running that acts as an API to glue together a few business applications, not a big deal to re-write in node).
This is a broad question, but I'll give you a few clues.
A nice example is Log.io. Since you want to run some commands and then push their output to a GUI, Node.js becomes a natural approach. The app could consist of a few parts:
part one, which runs commands, harvests their output, and pushes it to
part two, which receives the output and saves it to a DB/files; after saving, it raises an event for
part three, a websocket server which keeps track of which users are online and distributes events to
part four, a properly scripted GUI that connects to part three via websocket, logs the user in, receives events, and broadcasts them to other GUI elements.
Since I assume you feel stronger with PHP than Python, the easiest approach for you would be to create part two as a PHP service that handles the input (saves the harvested output to the DB) and then sends, say, a UDP packet to part three's UDP listening socket.
Part one would be a Python script that just captures command output and passes it on to part two. It should be as easy to handle as the usual grep case:
tail -f /var/log/apache2/access.log | /usr/share/bin/myharvester
At some point in development you will also need to pass a user or unique task id as a parameter after myharvester.
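A hypothetical sketch of that part-one harvester in Python: it reads piped lines from stdin and forwards each one as a JSON datagram to part two. The host, port, and frame layout here are assumptions:

```python
# Hypothetical "part one" harvester: reads lines piped in from e.g.
# `tail -f`, wraps each in a small JSON frame with the task id, and
# forwards it over UDP. Address and frame layout are assumptions.
import json
import socket
import sys

UDP_HOST, UDP_PORT = "127.0.0.1", 41234  # assumed part-two listener

def make_frame(line, task_id):
    """Wrap one output line in the JSON frame the listener expects."""
    return json.dumps(
        {"task": task_id, "line": line.rstrip("\n")}
    ).encode("utf-8")

def harvest(stream, task_id):
    """Forward every line of the stream as one UDP datagram."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for line in stream:
        sock.sendto(make_frame(line, task_id), (UDP_HOST, UDP_PORT))

# Offline check of the frame format (harvest(sys.stdin, 7) would run it):
frame = make_frame("GET /index.html 200\n", task_id=7)
print(frame)
```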
Tricky, but easier than you think, will be creating the Node.js script for part three. As a single-instance script it should be able to receive input and pass it on to users as events. I've committed something like this before:
var config = {
    server: { port: 8080 },            // example ports, adjust as needed
    listeners: { udp: { port: 41234 } }
};

var app = require('http').createServer().listen(config.server.port);
var io = require('socket.io').listen(app);

var listenerDgram = require('dgram').createSocket('udp4');
listenerDgram.bind(config.listeners.udp.port);

var sprintf = require('sprintf').sprintf;

var users = [];

app.on('error', function (er) {
    console.log(sprintf('[%s] [ERROR] HTTP Server at port %s has thrown %s', Date(), config.server.port, er.toString()));
    process.exit();
});

listenerDgram.on('error', function (er) {
    console.log(sprintf('[%s] [ERROR] UDP Listener at port %s has thrown %s', Date(), config.listeners.udp.port, er.toString()));
    process.exit();
});

listenerDgram.on('message', function (msg, rinfo) {
    // handle, say, a JSONized msg from the part-two script,
    // build a frame variable, and finally:
    if (user) {
        // emit to a single user, based on what happened
        // inside this method
        users[user].emit('notification', frame);
    } else {
        // emit to all users
        io.emit('notification', frame);
    }
});

io.sockets.on('connection', function (socket) {
    // handle user connections here and push users' sockets into
    // the users array.
});
This scrap is a basic example, not filled with the logic you need. The script opens a UDP listener on the given port and listens for users connecting to it over websockets. Honestly, once you become good at Node.js, you may want to build both part two and part three with it, which takes the UDP step off your hands, since the harvester can push output directly to the script that maintains the websockets; but that has the drawback of duplicating some logic from the other back end, such as the CRM.
The last (fourth) part would be the web interface, with JavaScript that connects the currently logged-in user to the socket server.
I've used a similar approach before, and it works in real time: we can show our call-center employees information about an incoming call before the phone even starts to ring. The final solution (not counting the CRM's interface) comes down to two scripts: a dedicated CRM API part (where all the logic happens) that handles events from Asterisk, and the Node.js event forwarder.
I'm using Py2neo in a project. Most of the time the neo4j server runs on localhost so in order to connect to the graph I just do:
g = Graph()
But when I run tests I'd like to connect to a different graph, preferably one I can trash without any consequences.
I'd like to have a "production" graph, possibly set up in such a way that even though it also runs on localhost, the tests won't have access to it.
Can this be done?
UPDATE 0 - A better way to put this question might have been: how can I get my localhost Neo4j to serve up two databases on two different ports? Once I've got that working it's trivial to use the REST client to connect to one or the other. I'm running the latest .deb version of Neo4j on an Ubuntu workstation (if that matters).
You can have multiple instances of Neo4j running on the same machine by configuring them to use different ports, i.e. 7474 for development and 7473 for tests.
Graph() defaults to http://localhost:7474/db/data/ but you can also pass a connection URI explicitly:
dev = Graph()
test = Graph("http://localhost:7473/db/data/")
prod = Graph("https://remotehost.com:6789/db/data/")
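One way to keep tests off the production graph is to pick the URI from configuration rather than hard-coding it. A small sketch, assuming an APP_ENV environment variable (the variable name is an assumption) and the two ports above:

```python
# Choose the py2neo connection URI from the environment so tests never
# touch the production instance. APP_ENV is an assumed variable name.
import os

DEV_URI = "http://localhost:7474/db/data/"   # default development instance
TEST_URI = "http://localhost:7473/db/data/"  # second instance for tests

def graph_uri(env=None):
    """Return the connection URI for the given environment name."""
    env = env or os.environ.get("APP_ENV", "dev")
    return TEST_URI if env == "test" else DEV_URI

print(graph_uri("test"))
```

Your code would then call `Graph(graph_uri())`, and the test suite would run with `APP_ENV=test` set.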
You can run the Neo4j server on a different machine and access it through the REST service.
In neo4j-server.properties, you can uncomment the line with the IP address 0.0.0.0.
That allows the server to be accessed from anywhere. I don't know about Python, but with Java I access that server using a Java REST library for Neo4j. Take a look here:
https://github.com/rash805115/bookeeping/blob/master/src/main/java/database/service/impl/Neo4JRestServiceImpl.java
Update 0: There are three ways to accomplish what you want.
Method 1: Start a Neo4j instance on a separate machine, then access that instance through the REST API. To do that, go into conf/neo4j-server.properties, find this line, and uncomment it:
#org.neo4j.server.webserver.address=0.0.0.0
Method 2: Start two Neo4j instances on the same machine but on different ports, and use the REST service to access them. To do this, copy the Neo4j distribution into two separate folders, then change these lines in conf/neo4j-server.properties so that at least one of them uses different ports:
First instance:
org.neo4j.server.webserver.port=7474
org.neo4j.server.webserver.https.port=7473
Second instance:
org.neo4j.server.webserver.port=8484
org.neo4j.server.webserver.https.port=8483
Method 3: From your comments it appears you want this, and indeed it is the easiest method: have two separate databases in the same Neo4j instance. You don't have to change any configuration files, just a line in your code. I have not done this in Python exactly, but I have done the same in Java. Let me give you the Java code so you can see how easy it is.
Production Code:
package rash.experiments.neo4j;
import org.neo4j.cypher.javacompat.ExecutionEngine;
import org.neo4j.cypher.javacompat.ExecutionResult;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Transaction;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;
public class Neo4JEmbedded
{
    public static void main(String args[])
    {
        GraphDatabaseService graphDatabaseService = new GraphDatabaseFactory().newEmbeddedDatabase("db/productiondata/");
        ExecutionEngine executionEngine = new ExecutionEngine(graphDatabaseService);

        try(Transaction transaction = graphDatabaseService.beginTx())
        {
            executionEngine.execute("create (node:Person {userId: 1})");
            transaction.success();
        }

        ExecutionResult executionResult = executionEngine.execute("match (node) return count(node)");
        System.out.println(executionResult.dumpToString());
    }
}
Test Code:
package rash.experiments.neo4j;
import org.neo4j.cypher.javacompat.ExecutionEngine;
import org.neo4j.cypher.javacompat.ExecutionResult;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Transaction;
import org.neo4j.graphdb.factory.GraphDatabaseFactory;
public class Neo4JEmbedded
{
    public static void main(String args[])
    {
        GraphDatabaseService graphDatabaseService = new GraphDatabaseFactory().newEmbeddedDatabase("db/testdata/");
        ExecutionEngine executionEngine = new ExecutionEngine(graphDatabaseService);

        try(Transaction transaction = graphDatabaseService.beginTx())
        {
            executionEngine.execute("create (node:Person {userId: 1})");
            transaction.success();
        }

        ExecutionResult executionResult = executionEngine.execute("match (node) return count(node)");
        System.out.println(executionResult.dumpToString());
    }
}
Note the difference in this line:
GraphDatabaseService graphDatabaseService = new GraphDatabaseFactory().newEmbeddedDatabase("db/testdata/");
This creates two separate folders, db/productiondata and db/testdata. Each folder contains separate data, and your code can use either one based on your requirements.
I am pretty sure you have to do almost the same thing in your Python code. Something like this (note that this code might not be correct):
g = Graph("/db/productiondata")
g = Graph("/db/testdata")
Unfortunately, this is a problem without a perfect solution right now. There are however a few options available which may suffice for what you need.
First, have a look at the py2neo build script: https://github.com/nigelsmall/py2neo/blob/release/2.0.5/bau
This is a bash script that spawns a new database instance for each version that needs testing, starting up with an empty store beforehand and closing down afterwards. It uses the default port 7474 but it should be an easy change to tweak this automatically in the properties file. Specifically here, you'll probably want to look at the test, neo4j_start and neo4j_stop functions.
Additionally, py2neo provides an extension called neobox:
http://py2neo.org/2.0/ext/neobox.html
This is intended to be a quick and simple way to set up new database instances running on free ports and might be helpful in this case.
Note that, generally speaking, clearing down the data store between tests is a bad idea, as this is a slow operation that can seriously impact the running time of your test suite. For that reason a test database that lives for the duration of all tests is a better idea, although it requires a little thought when writing tests so that they don't overlap.
Going forward, Neo4j will gain DROP functionality to help with this kind of work but it will likely be a few releases before this appears.
Summary:
Is there a way to use the execute() function to pass a parameter to a Python script, and have the Python script use the parameter in its execution, then return the result to ExtendScript?
Context:
I'm building a script for Illustrator that has to query a web service, process the resultant XML file, and return the results to the user. This would be easy if I were using one of the applications that support the Socket feature, but Illustrator doesn't. My next thought was that I could achieve the HTTP request and XML parsing in Python, but I'm at a loss on how to bridge the two.
Option 1 (BridgeTalk)
I had to do something like this to run an external PNG processor from both Photoshop and Illustrator. Neither of those applications has the ability to execute external programs from ExtendScript (but see Option 2). Adobe Bridge's app object has a system method that executes a command in the system shell. Using a BridgeTalk object, you can call that method remotely from Illustrator. You'll only get the exit code in return, though, so you'll need to redirect your program's output to a file and then read that file in your script.
Here's an example of using BridgeTalk and Adobe Bridge to run an external program:
var bt = new BridgeTalk();
bt.target = 'bridge';
bt.body = 'app.system("ping -c 1 google.com")';
bt.onResult = function (result) {
$.writeln(result.body);
};
bt.send();
Pros
Asynchronous
Can easily retrieve the exit code
Can use shell syntax and pass arguments to the program directly
Cons
Adobe Bridge must be installed
Adobe Bridge must be running (although BridgeTalk will launch it for you, if needed)
Option 2 (File.prototype.execute)
I discovered this later and can't believe I missed it. The File class has an execute instance method that opens or executes the file. It might work for your purposes, although I haven't tried it myself.
Pros
Asynchronous
Built into each ExtendScript environment (no inter-process communication)
Cons
Can't retrieve the exit code
Can't use shell syntax or pass arguments to the program directly
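Whichever option launches it, the Python side could be a small script along these lines: it fetches the XML document, pulls out some fields, and writes them to a file the ExtendScript side reads back. The tag name, URL handling, and output path here are assumptions:

```python
# Hypothetical helper a BridgeTalk app.system() call (or File.execute)
# could invoke: fetch an XML document over HTTP, extract the fields of
# interest, and write them to a file for ExtendScript to read back.
# The <title> tag and the argv interface are assumptions.
import sys
import urllib.request
import xml.etree.ElementTree as ET

def extract_titles(xml_text):
    """Return the text of every <title> element in the document."""
    root = ET.fromstring(xml_text)
    return [el.text for el in root.iter("title")]

def main(url, out_path):
    with urllib.request.urlopen(url) as resp:
        titles = extract_titles(resp.read().decode("utf-8"))
    with open(out_path, "w") as f:
        f.write("\n".join(titles))

if __name__ == "__main__" and len(sys.argv) >= 3:
    main(sys.argv[1], sys.argv[2])

# Offline check against a canned response:
sample = "<feed><title>a</title><title>b</title></feed>"
print(extract_titles(sample))  # ['a', 'b']
```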
ExtendScript does support Socket; the following is a code snippet:
reply = "";
conn = new Socket;

// access Adobe's home page
if (conn.open("www.adobe.com:80")) {
    // send an HTTP GET request
    conn.write("GET /index.html HTTP/1.0\n\n");

    // and read the server's reply
    reply = conn.read(999999);
    conn.close();
}
I've been programming in Python. I started a few months ago, so I'm not the "guru" type of developer. I also know the basics of HTML and CSS.
I've seen a few tutorials about Node.js and I really like it. I cannot create those forms, bars, buttons, etc. with my knowledge of HTML and CSS.
Can I use Node.js to create what the user sees in the browser, and use Python to handle what happens when someone pushes the "submit" button? For example redirects, SQL writes and reads, etc.
Thank you
You can call Python scripts on the back end at the Node server in response to a button click by the user. For that you can use the child_process package, which allows you to call programs installed on your machine.
For example, here is how to run your script when a user POSTs something to the /reg page:
app.post('/reg', function (request, response) {
    var spawn = require('child_process').spawn;
    var path = "location of your script";

    // create a child process running your script, passing it two arguments from the request
    var backend = spawn('python', [path, request.body.name, request.body.email]);

    backend.on('exit', function (code) {
        console.log(path + ' exited with code ' + code);
        if (code == 0)
            response.render('success'); // show the success page if the script ran successfully
        else
            response.redirect('bad');
    });
});
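On the Python side, a hypothetical script matching that spawn call could read the two arguments and signal success through its exit code, which the 'exit' handler above checks. The validation rule here is just a placeholder:

```python
# Hypothetical counterpart to the spawn() call above: Node passes name
# and email as argv, and the exit code (0 or 1) tells Node whether to
# render 'success'. The validation rule is a stand-in for real work.
import sys

def register(name, email):
    """Return True if the submission looks valid (placeholder rule)."""
    return bool(name) and "@" in email

if __name__ == "__main__" and len(sys.argv) >= 3:
    name, email = sys.argv[1], sys.argv[2]
    sys.exit(0 if register(name, email) else 1)
```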
Python has to be installed on your system, along with any other Python libraries you will need. The script cannot respond or redirect to requests itself; that remains Node's job (otherwise, why would you use Node at all?). When in Rome, do as the Romans do: prefer JavaScript libraries in Node where you can, since calling external programs is not as fast.
Node.js is a server-side JavaScript environment (like Python). It runs on the server, interacts with the database, and generates the HTML that clients see; it isn't actually directly accessed by the browser.
Browsers, on the other hand, run clientside JavaScript directly.
If you want to use Python on the server, there are a bunch of frameworks that you can work with:
Django
Flask
Bottle
Web.py
CherryPy
many, many more...
I think you're thinking about this problem backwards. Node.js lets you run JavaScript outside the browser; you won't find it useful in your Python programming. If you want to stick with Python, you're better off using a framework such as Pyjamas to write your JavaScript in Python, or a framework such as Flask or Twisted to integrate the JavaScript with Python.