The standard way to test VHDL code logic is to write a test bench in VHDL and use a simulator like ModelSim, which I have done numerous times.
I have heard that instead of writing test benches in VHDL, engineers are now using Python to test their VHDL code.
Questions:
How is this done?
Is this done by writing a test bench in Python and then compiling this Python file or linking it into ModelSim?
Is this done in Python using a module like myHDL and then linking/importing your VHDL file into Python? If so, how is the timing diagram generated?
When writing a test bench in Python can you use standard Python coding/modules or just a module like myHDL?
For example if I want to test a TCP/IP stack in VHDL, can I use the socket module in Python to do this (i.e. import socket)?
Is there a reference, paper, or tutorial that shows how to do this? I've checked the Xilinx, Altera, and Modelsim websites but could not find anything.
The only things I find online about using Python for FPGAs are a few packages, with myHDL being the most referenced.
Why Python is good for this
Python is an efficient language for writing reference models, and for processing and verifying the output files of your testbench. Many things take much less time to program in Python than in a VHDL testbench.
One key feature of Python that makes it so suitable is its automatic use of big integers. If your VHDL uses widths greater than 64 bits, it can be tedious to interface with that in other languages. Also, when verifying arithmetic functionality, Python's Decimal module, for example, is a convenient tool for building high-precision references.
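For example (a minimal sketch; the 96-bit width and the precision of 50 digits are arbitrary choices):

from decimal import Decimal, getcontext

# Python integers are arbitrary precision, so a 96-bit bus value is just an int.
bus_value = (1 << 96) - 1          # all ones on a hypothetical 96-bit bus
print(hex(bus_value))

# Decimal gives a high-precision arithmetic reference.
getcontext().prec = 50
print(Decimal(1) / Decimal(7))     # 50 significant digits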
How can this be done?
This is an example that does not rely on any other software tools.
A straightforward practical way to interface Python is via input and output files. You can read and write files in your VHDL testbench as it is running in your simulator. Usually each line in the file represents a transaction.
Depending on your needs, you can either have the VHDL testbench compare your design outputs to a reference, or read the output file back into a Python program. If using files takes too long, at least on Linux it's easy to set up a pipe with mkfifo that you can use as if it were a file, and you can then have your Python program and simulator read from and write to it in sync.
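As a minimal sketch of that flow (the file names and the one-hex-value-per-line format are assumptions, not a standard):

# Write a stimulus file for the VHDL testbench to read, one transaction per line.
with open("stimulus.txt", "w") as f:
    for value in range(16):
        f.write(f"{value:04x}\n")

# ... run the simulator here, e.g. via subprocess ...

# Read the output file the testbench wrote and compare against a Python reference.
def reference_model(x):
    return (x * 3) & 0xFFFF        # hypothetical DUT behaviour

with open("output.txt") as f:
    for i, line in enumerate(f):
        assert int(line, 16) == reference_model(i), f"mismatch on transaction {i}"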
Using standard Python modules
With the method I described, you'll have to find a way to separate whatever that module is doing into a stream of data or raw transactions. Unfortunately for your example of a TCP/IP stack, I see no straightforward way to do this and it would probably require significant clarification to discuss further.
More information
Several useful things are linked to in the comments above, especially the link posted by user1155120: https://electronics.stackexchange.com/questions/106784/can-you-interface-a-modelsim-testbench-with-an-external-stimuli
Maybe it is better to do the whole design in some other language like SystemC/MyHDL/HWToolkit, test it there, and only then export it to your target language.
In order to simulate HDL and drive it from regular Python code, you have to use a framework like cocotb, PyCoRAM, or others. Still, it gets complicated really fast.
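For a flavour of the cocotb approach, here is a minimal sketch in the style of cocotb 1.x (the port names clk, rst, data_in and data_out, and the pass-through behaviour, are assumptions about the DUT):

import cocotb
from cocotb.clock import Clock
from cocotb.triggers import RisingEdge

@cocotb.test()
async def simple_test(dut):
    # Drive a 10 ns clock on the DUT's (assumed) clock port.
    cocotb.start_soon(Clock(dut.clk, 10, units="ns").start())
    dut.rst.value = 1
    await RisingEdge(dut.clk)
    dut.rst.value = 0
    dut.data_in.value = 0x2A
    await RisingEdge(dut.clk)
    assert dut.data_out.value == 0x2A, "unexpected DUT output"

cocotb itself takes care of launching your simulator (ModelSim, GHDL, etc.) and connecting the Python coroutines to the running simulation via VPI/FLI.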
One of the best approaches I have found is to use a simulator plus agents with queues/abstract memory, and to work in your code only with these queues, not with the simulator or the model itself. It makes writing testbenches/verification simple.
In the Python hardware world, only HWToolkit currently uses this approach:
tx_rx_test.py
You could write IO from python to text files and use your traditional File-IO VHDL methods to stimulate the DUT.
But you are better off performing verification in a native environment using a language that was developed for functional verification. Using SystemVerilog and the ModelSim superset packaged as "Questa Prime" (if you have access to it) is much better.
Assuming you want your Python file(s) to be part of your real-time simulation in terms of transfer functions and/or stimulus generation, they will need to be converted to shared-object files. Everything else is post-processing and is not simulator dependent, e.g. writing output files for later comparison. Non-native simulations using shared-object files require access via VPI/VLI/FLI/PLI/DPI (depending on what language/tool you are using), which then introduces the overhead of passing the kernel back and forth across the interface, i.e. stop sim, work externally, run sim, stop sim, etc.
SystemVerilog (IEEE 1800) was developed for IC designers, by IC designers, to address the issue of functional verification. While I am all for reuse of .so files from system-engineering simulations, if you are writing the testbench (and you shouldn't write your own TB, but that is another discussion), you would be better off using SV, given all the built-in functions for constraining stimulus generation, functional coverage, and reporting via UCDB.
You should have a reference manual in your ModelSim install that describes using the programming interface. You can learn more here: https://verificationacademy.com/ but it is highly SV-centric. Unfortunately, VHDL is not a verification language; even with the additions made to make it more like SV, it is still better categorized as a modeling language.
Check out PyVHDL. It integrates Python and VHDL in an Eclipse GUI with editing, simulation, and VCD generation and display.
GUI Image: https://i.stack.imgur.com/BvQWN.jpg
We have a sizable code base in Perl. For the foreseeable future, our codebase will remain in Perl. However, we're looking into adding a GUI-based dashboard utility. We are considering writing the dashboard in Python (using tkinter or wx). The problem, however, is that we would like to leverage our existing Perl codebase in the Python GUI.
So... any suggestions on how achieve this? We are considering a few options:
Write executables (in Perl) that mimic function calls; invoke those Perl executables in Python as system calls (see the sketch after this list).
Write Perl executables on-the-fly inside the Python dashboard, and invoke the (temporary) Perl executable.
Find some kind of Perl-to-Python converter or binding.
Any other ideas? I'd love to hear if other people have confronted this problem. Unfortunately, it's not an option to convert the codebase itself to Python at this time.
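For option 1, the Python side can be as thin as a subprocess call plus a JSON-decoding step (a minimal sketch; lookup_user.pl is a hypothetical Perl wrapper around one of your existing functions, assumed to print JSON on stdout):

import json
import subprocess

def lookup_user(user_id):
    # Run the Perl wrapper and capture its stdout; JSON keeps the boundary simple.
    result = subprocess.run(
        ["perl", "lookup_user.pl", str(user_id)],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

print(lookup_user(42))

The same pattern would cover option 2 if you write the generated Perl source to a temporary file first.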
I hate to be another one in the chorus, but...
Avoid the use of an alternate language
Use Wx so its native look and feel makes the application look "real" to non-technical audiences.
Download the Padre source code and see how it does Wx Perl code, then steal rampantly from its best tricks, or maybe just gut it and use the application skeleton (using the Artistic half of the Perl dual license to make it legal).
Build your own Strawberry Perl subclass to package the application as an MSI installer and push it out across the corporate Active Directory domain.
Of course, I only say all this because you said "Dashboard" which I read as "Corporate", which then makes me assume a Microsoft AD network...
You can spawn a child process and use an IPC mechanism like sockets or STDIO, or even embed one interpreter in the other.
But why switch languages when Perl offers several Tk (Tk, Tkx, and Tcl::Tk) bindings and a very capable Wx binding?
I have written and distributed GUI projects with Perl's Tk and Wx libraries.
If you need the ability to create stand-alone executables, check out PAR::Packer, ActiveState's PerlApp, and Cava Packager.
Try the CPAN distribution Python (pyperl) for interfacing with python code.
Well, if you really want to write the GUI in another language (which, seriously, is just a bad idea, since it will cost you more than it could ever benefit you), the thing you should do is the following:
Document your Perl app in terms of the services it provides. You should do it with XML Schema Definition - XSD - for the data types and Web Service Description Language - WSDL - for the actual service.
Implement the services in Perl, possibly using Catalyst::Controller::SOAP, or just XML::Compile::SOAP.
Consume the services from your whatever-language GUI interface.
Profit.
But honestly, I really suggest you taking a look at the Perl GTK2 binding, it is awesome, including features such as implementing a Gtk class entirely in Perl and using it as argument to a function written in C - for instance, you can write a model class for a gtk tree entirely in Perl.
I'd avoid inter-language calls if possible; fragility and massive increases in dependencies await you down this road. However, there is...
Inline::Python
If python must be used, the Inline::* series of modules have been generally well received. This lets you write python inside a perl script. You still have to write perl but it would let you use python libraries inside perl scripts. It will make things more difficult to debug, though.
I'd think the major criterion for any qualified answer would involve the details of the existing codebase. How is this Perl code called, and how does it return its results?
A collection of command line utilities returning results through reasonably good textual output ("good" as in "amenable to further machine parsing" or "pipeline friendly") should be reasonably easy to call from any programming language (and Python's excellent subprocess and multiprocessing modules in particular). A collection of web CGI or other modules layered between Apache and some DBMS could still be accessed with things like urllib2 or mechanize -- but it might be better to bypass the Perl code and write Python to query the underlying (presumably canonical) model (data store).
If the majority of the codebase is a set of libraries or modules ... and the functionality that your proposed dashboard requires isn't already exposed via some higher level mechanism (some command line interface, networking protocol, etc) ... then it's basically insane to consider interfacing to it through any language other than Perl. (Unless, by some strange and extremely unlikely twist of fate, your existing codebase and your intended implementation target are both already stable under Parrot).
Let's ask a different, broader question: what interface do you intend to use between your dashboard and your existing code base?
This question is paramount regardless of your choice of implementation language. If you write the dashboard in Perl, it still needs to call into your existing code base in some way. You probably need to fix up your code base to implement support for whatever you're going to use for your dashboard. At the point where your codebase supports the necessary API (has command line or IPC protocol calls into the desired functionality which return results over any reasonable IPC mechanism), your choice of dashboard implementation language becomes essentially arbitrary.
Interesting project: I would opt for loose-coupling and consider an XML-RPC or JSON based approach.
Find some kind of Perl-to-Python converter
Try out my pythonizer - it does a pretty good job at that!
I want to write some code to do acoustic analysis and I'm trying to determine the proper tool(s) for the job. I would normally write something like this in Python using numpy and scipy and possibly Cython for the analysis part. I've discovered that the world of Python audio libraries is a bit chaotic, with scads of very limited packages in various states of development.
I've also come across a bunch of audio/acoustic specific languages like SuperCollider, Faust, etc. that seem to make the audio processing easy but may be limited in terms of IO and analysis capability.
I'm currently working on Linux with ALSA and PulseAudio installed by default. I would prefer not to involve any of the various and sundry other audio packages like JACK if possible, though that is not a hard requirement.
My primary interest in this question is to determine whether there is a domain specific language that will provide for quicker prototyping and testing or whether a general language like Python is more appropriate. Thanks.
I've got a lot of experience with SuperCollider and Python (with and without Numpy). I do a lot of audio analysis, and I'm afraid the answer depends on what you want to do.
If you want to create systems that will input OR output audio in real time, then Python is not a good choice. The audio I/O libraries (as you say) are a bit sketchy. There's also a fundamental issue that Python's garbage collector is not really designed for realtime stuff. You should use a system that is designed from the ground up for realtime. SuperCollider is nice for this, and as caseyanderson notes, some of the standard building-blocks for audio analysis are right there. There are other environments too.
If you want to do hardcore work such as applying various machine learning algorithms, not necessarily in real time (i.e. if you can get away with reading/writing WAV files rather than live audio), then you should use a general-purpose programming language with wide support, and an ecosystem of good libraries for the extra things you want. Using Python with libs such as numpy and scikits-learn works great for this. It's good for quick prototyping, but not only does it lack solid realtime audio, it also has far fewer of the standard audio building-blocks. Those are two important things which hold you back when prototyping audio pipelines.
So, then, you're caught between these two options. Depending on your application you may be able to combine the two by manipulating the audio I/O in a realtime environment, and using OSC messaging or shell scripts to communicate with an external Python process. The limitation there is that you can't really throw masses of data around between the two (you can't sensibly pipe all your audio across to some other process, that'd be silly).
SuperCollider has lots of support for things along these lines, both as externals/plugins or Quarks. That said, it depends exactly what you want to do. If you are simply looking to detect events, Onsets.kr would be fine. If you are looking for frequency/pitch information, Pitch or Tartini would work (I find Tartini to be more accurate). If you are trying to track amplitude, a combination of Amplitude.ar and some simple math would also work.
Similarly, there is SpecCentroid.kr (for a kind of brightness analysis), Loudness.kr, SpecFlatness.kr, etc.
The above are all pretty general, and there are lots more (the JoshUGens externals package has some interesting FFT-related acoustics stuff). So I would recommend downloading the program, joining the mailing list if you have further questions, and poking around in the externals, Quarks, and standard UGens.
Nonetheless, since I am not sure what you are trying to do, I cannot make more concrete recommendations than the above combined with my feeling that it makes the most sense to go to SC for this, rather than writing all of your own tools in Python from scratch.
I'm not 100% sure what you want to do, but as an additional suggestion I would put forth: Spear, with scripting in Common Lisp. If what you are doing involves a great deal of spectral analysis, you can do the heavy lifting in Spear and script all of it using Common Lisp with Common Music. Spear has some great tools for editing out very specific partials.
What is Python used for and what is it designed for?
Python is a dynamic, strongly typed, object oriented, multipurpose programming language, designed to be quick (to learn, to use, and to understand), and to enforce a clean and uniform syntax.
Python is dynamically typed: you don't declare a type (e.g. 'integer') for a variable name and then assign only things of that type to it. Instead, you have variable names, and you bind them to entities whose type stays with the entity itself. a = 5 makes the variable name a refer to the integer 5. Later, a = "hello" makes the variable name a refer to a string containing "hello". Statically typed languages would have you declare int a and then a = 5, but assigning a = "hello" would be a compile-time error. On one hand, this makes everything less predictable (you don't know what a refers to). On the other hand, it makes it very easy to achieve results that statically typed languages make very difficult.
Python is strongly typed. It means that if a = "5" (the string whose value is '5'), a will remain a string and never be coerced to a number even if the context seems to call for one. Every type conversion in Python must be done explicitly. This is different from, for example, Perl or JavaScript, where you have weak typing and can write things like "hello" + 5 to get "hello5".
Python is object oriented, with class-based inheritance. Everything is an object (including classes, functions, modules, etc), in the sense that they can be passed around as arguments, have methods and attributes, and so on.
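A few lines illustrating the three points above (dynamic typing, strong typing, and everything being an object):

a = 5              # a refers to an int
a = "hello"        # now a refers to a str: dynamic typing

try:
    "hello" + 5    # no implicit coercion: strong typing
except TypeError as e:
    print(e)

def shout(text):
    return text.upper()

f = shout          # functions are objects and can be passed around
print(f("hi"), shout.__name__)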
Python is multipurpose: it is not specialised for a specific target of users (like R for statistics, or PHP for web programming). It is extended through modules and libraries that hook very easily into the C programming language.
Python enforces correct indentation of the code by making the indentation part of the syntax. There are no control braces in Python. Blocks of code are identified by the level of indentation. Although a big turn off for many programmers not used to this, it is precious as it gives a very uniform style and results in code that is visually pleasant to read.
The code is compiled into byte code and then executed in a virtual machine. This means that precompiled code is portable between platforms.
Python can be used for any programming task, from GUI programming to web programming with everything else in between. It's quite efficient, as much of its activity is done at the C level. Python is just a layer on top of C. There are libraries for everything you can think of: game programming and openGL, GUI interfaces, web frameworks, semantic web, scientific computing...
Why should you learn Python Programming Language?
Python offers a stepping stone into the world of programming. Even though the Python programming language has been around for 25 years, it is still rising in popularity.
Some of the biggest advantages of Python:
It is easy to read and easy to learn
It is very productive for small as well as big projects
It has big libraries for many things
What is Python Programming Language used for?
As a general purpose programming language, Python can be used for multiple things. Python can be easily used for small, large, online and offline projects. The best options for utilizing Python are web development, simple scripting and data analysis. Below are a few examples of what Python will let you do:
Web Development:
You can use Python to create web applications on many levels of complexity. There are many excellent Python web frameworks including, Pyramid, Django and Flask, to name a few.
Data Analysis:
Python is the leading language of choice for many data scientists. Python has grown in popularity within this field due to its excellent libraries, including NumPy and pandas, and its superb libraries for data visualisation, like Matplotlib and Seaborn.
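For instance, a couple of lines of pandas already gives you aggregates and filters (a toy example with made-up numbers):

import pandas as pd

df = pd.DataFrame({"city": ["Oslo", "Lima", "Pune"],
                   "temp_c": [3.5, 22.1, 28.4]})
print(df["temp_c"].mean())          # quick aggregate
print(df[df["temp_c"] > 20])        # quick filter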
Machine Learning:
What if you could predict customer satisfaction, analyse what factors affect household pricing, or predict stocks over the next few days based on previous years' data? There are many wonderful libraries implementing machine learning algorithms, such as scikit-learn, NLTK and TensorFlow.
Computer Vision:
You can do many interesting things, such as face detection and color detection, using OpenCV and Python.
Internet Of Things With Raspberry Pi:
Raspberry Pi is a very tiny and affordable computer, developed for education, that has gained enormous popularity among hobbyists for do-it-yourself hardware and automation. You can even build a robot and automate your entire home. The Raspberry Pi can be used as the brain of your robot, performing various actions and/or reacting to the environment. The coding on a Raspberry Pi can be done in Python. The possibilities are endless!
Game Development:
Create a video game using the Pygame module. Basically, you use Python to write the logic of the game. Pygame applications can run on Android devices.
Web Scraping:
If you need to grab data from a website that does not have an API to expose it, you can use Python to scrape the data.
Writing Scripts:
If you're doing something manually and want to automate repetitive tasks, such as sending emails, it's not difficult once you know the basics of the language.
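For example, sending a report email takes only a few lines of standard library code (a sketch; the SMTP host and addresses are placeholders):

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "Nightly report"
msg["From"] = "me@example.com"
msg["To"] = "team@example.com"
msg.set_content("All jobs finished without errors.")

# Hand the message to a placeholder SMTP server.
with smtplib.SMTP("mail.example.com") as server:
    server.send_message(msg)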
Browser Automation:
You can perform neat tricks such as opening a browser and posting a Facebook status using Selenium with Python.
GUI Development:
Build a GUI application (desktop app) using Python modules such as Tkinter or PyQt.
Rapid Prototyping:
Python has libraries for just about everything. Use it to quickly build a (lower-performance, often less powerful) prototype. Python is also great for validating ideas or products, for established companies and start-ups alike.
Python can be used in so many different projects. If you're a programmer looking for a new language, you want one that is growing in popularity. As a newcomer to programming, Python is the perfect choice for learning quickly and easily.
In my new job more people are using Python than Perl, and I have a very useful API that I wrote myself and I'd like to make available to my co-workers in Python.
I thought that a compiler that compiled Perl code into Python code would be really useful for such a task. Before trying to write something that parsed Perl (or at least, the subset of Perl that I've used in defining my API), I came across bridgekeeper from a consultancy.
It's almost certainly not worth the money for me to engage a consultancy to translate this API, but that's a really interesting tool.
Does anyone know of a compiler that will parse (or try to parse!) Perl5 code and compile it into Python? If there isn't such a thing, how should I start writing a simple compiler that parses my object-oriented Perl code and turns it into Python? Is there an ANTLR or YACC grammar that I can use as a starting point?
Edit: I found perl.y, which might be a starting point if I were to roll my own compiler.
James,
I recommend you to just rewrite the module in Python, for several reasons:
Parsing Perl is DARN HARD. Unless this is an important and desirable exercise for you, you'll find yourself spending much more time on the translation than on useful work.
By rewriting it, you'll have a great chance to practice Python. Learning is best done by doing, and having a task you really need done is a great boon.
Finally, Python and Perl have quite different philosophies. To get a more Pythonic API, it's best to just rewrite it in Python.
I think you should rewrite your code. The quality of the results of a parsing effort depends on your Perl coding style.
I think the quote below sums up the theoretical side very well.
From the Wikipedia article on Perl:
Perl has a Turing-complete grammar because parsing can be affected by run-time code executed during the compile phase.[25] Therefore, Perl cannot be parsed by a straight Lex/Yacc lexer/parser combination. Instead, the interpreter implements its own lexer, which coordinates with a modified GNU bison parser to resolve ambiguities in the language.
It is often said that "Only perl can parse Perl," meaning that only the Perl interpreter (perl) can parse the Perl language (Perl), but even this is not, in general, true. Because the Perl interpreter can simulate a Turing machine during its compile phase, it would need to decide the Halting Problem in order to complete parsing in every case. It's a long-standing result that the Halting Problem is undecidable, and therefore not even Perl can always parse Perl. Perl makes the unusual choice of giving the user access to its full programming power in its own compile phase. The cost in terms of theoretical purity is high, but practical inconvenience seems to be rare.
Other programs that undertake to parse Perl, such as source-code analyzers and auto-indenters, have to contend not only with ambiguous syntactic constructs but also with the undecidability of Perl parsing in the general case. Adam Kennedy's PPI project focused on parsing Perl code as a document (retaining its integrity as a document), instead of parsing Perl as executable code (which not even Perl itself can always do). It was Kennedy who first conjectured that, "parsing Perl suffers from the 'Halting Problem'."[26], and this was later proved.[27]
Starting in 5.10, you can compile perl with the experimental Misc Attribute Decoration enabled and set the PERL_XMLDUMP environment variable to a filename to get an XML dump of the parse tree (including comments - very helpful for language translators). Though as the doc says, this is a work in progress.
I never tried it and it seems unmaintained, but maybe PyPerl is an option?
How big is this API? If it is really this useful, then why don't you rewrite it in Python? Writing an automatic converter will probably take longer than rewriting the API.
And even if you manage to automatically rewrite it, the resulting code probably won't be very pythonic anyway.
Be sure to check out the answers by weismat and eliben
As much as it might be fun to convert it to or rewrite it in python, I wouldn't make either of those my first choice. Then you'd be stuck with a forked code base. Any modifications you make will have to be duplicated.
Write some sort of wrapper for your API that you can access from outside of Perl. One possibility is a RESTful interface. Another, if you don't want to deal with networking issues, is to create a set of command line tools that access the API (possibly passing information as JSON). Then you can write an easy python library which accesses the wrapper API using httplib2 or subprocess (depending on how you've implemented the wrapper).
You'll still have to update the Python API whenever the interface changes, but now it's only for interface changes.
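For the RESTful flavour, the Python client side needs nothing beyond the standard library (a sketch using urllib.request rather than the httplib2 mentioned above; the URL and the JSON reply shape are assumptions about your wrapper):

import json
from urllib.request import urlopen

def get_status(job_id):
    # Query the hypothetical Perl-backed wrapper service and decode its JSON reply.
    with urlopen(f"http://localhost:8080/jobs/{job_id}") as resp:
        return json.load(resp)

print(get_status(7))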
You could try writing a parser with PPI, dump it to some intermediary form, and write Python mechanically from there. Hard, but doable. Useful? Er....
Or you could port your code to Perl 6 and wait for Pynie to be ready enough to allow direct calls from Python to Perl 6 within the same runtime! It's not that far away, after all. Too bad Ponie's dead, though.
Perthon (https://perthon.sourceforge.net) could probably work. While it is still in alpha, I see a lot of potential.
I have a small lightweight application that is used as part of a larger solution. Currently it is written in C but I am looking to rewrite it using a cross-platform scripting language. The solution needs to run on Windows, Linux, Solaris, AIX and HP-UX.
The existing C application works fine but I want to have a single script I can maintain for all platforms. At the same time, I do not want to lose a lot of performance but am willing to lose some.
Startup cost of the script is very important. This script can be called anywhere from every minute to many times per second. As a consequence, keeping its memory use and startup time low is important.
So basically I'm looking for the best scripting languages that is:
Cross platform.
Capable of XML parsing and HTTP Posts.
Low memory and low startup time.
Possible choices include but are not limited to: bash/ksh + curl, Perl, Python and Ruby. What would you recommend for this type of a scenario?
Lua is a scripting language that meets your criteria. It is among the fastest and lowest-memory scripting languages available.
Because of your requirement for fast startup time and a calling frequency greater than 1 Hz, I'd recommend either staying with C and figuring out how to make it portable (not always as easy as a few ifdefs) or exploring the possibility of turning it into a service daemon that is always running. Of course, this depends on how the application is actually used.
Python can have lower startup times if you compile the module and run the .pyc file, but it is still generally considered slow. Perl, in my experience, is the fastest of the scripting languages, so you might have good luck with a Perl daemon.
You could also look at cross platform frameworks like gtk, wxWidgets and Qt. While they are targeted at GUIs they do have low level cross platform data types and network libraries that could make the job of using a fast C based application easier.
"called anywhere from every minute to many times per second. As a consequence, keeping it's memory and startup time low are important."
This doesn't sound like a script to me at all.
This sounds like a server handling requests that arrive from every minute to several times a second.
If it's a server, handling requests, start-up time doesn't mean as much as responsiveness. In which case, Python might work out well, and still keep performance up.
Rather than restarting, you're just processing another request. You get to keep as much state as you need to optimize performance.
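To make that concrete, here is a minimal sketch of such a long-running request handler using Python's standard socketserver module (the port and the line-based protocol are assumptions):

import socketserver

class Handler(socketserver.StreamRequestHandler):
    def handle(self):
        # Read one request line, process it, write one response line.
        request = self.rfile.readline().decode().strip()
        self.wfile.write(f"processed: {request}\n".encode())

with socketserver.TCPServer(("localhost", 9000), Handler) as server:
    server.serve_forever()    # pay the startup cost once, then stay resident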
When written properly, C should be platform independent and would only need a recompile for those different platforms. You might have to jump through some #ifdef hoops for the headers (not all systems use the same headers), but most normal (non-Win32 API) calls are very portable.
For web access (which I presume you need as you mention bash+curl), you could take a look at libcurl, it's available for all the platforms you mentioned, and shouldn't be that hard to work with.
With execution time and memory cost in mind, I doubt you could go any faster than properly written C with any scripting language as you would lose at least some time on interpreting the script...
I concur with Lua: it is super-portable, it has XML libraries, either native or by binding C libraries like Expat, it has a good socket library (LuaSocket) plus, for complex stuff, some cURL bindings, and is well known for being very lightweight (often embedded in low memory devices), very fast (one of the fastest scripting languages), and powerful. And very easy to code!
It is coded in pure ANSI C, and a lot of people claim it has one of the best C binding APIs (calling C routines from Lua, calling Lua code from C...).
If low memory and low startup time are truly important, you might want to consider doing the work to keep the C code cross-platform; however, I have found this is rarely necessary.
Personally I would use Ruby or Python for this type of job, they both make it very easy to make clear understandable code that others can maintain (or you can maintain after not looking at it for 6 months). If you have the control to do so I would also suggest getting the latest version of the interpreter, as both Ruby and Python have made notable improvements around performance recently.
It is a bit of a personal thing. Programming Ruby makes me happy, C code does not (nor bash scripting for anything non-trivial).
As others have suggested, daemonizing your script might be a good idea; that would reduce the startup time to virtually zero. Either have a small C wrapper that connects to your daemon and transmits the request back and forth, or have the daemon handle requests directly.
It's not clear if this is intended to handle HTTP requests; if so, Perl has a good HTTP server module, bindings to several different C-based XML parsers, and blazing fast string support. (If you don't want to daemonize, it has a good, full-featured CGI module; if you have full control over the server it's running on, you could also use mod_perl to implement your script as an Apache handler.) Ruby's strings are a little slower, but there are some really good backgrounding tools available for it. I'm not as familiar with Python, I'm afraid, so I can't really make any recommendations about it.
In general, though, I don't think you're as startup-time-constrained as you think you are. If the script is really being called several times a second, any decent interpreter on any decent operating system will be cached in memory, as will the source code of your script and its modules. Result: the startup times won't be as bad as you might think.
Dagny:~ brent$ time perl -MCGI -e0
real 0m0.610s
user 0m0.036s
sys 0m0.022s
Dagny:~ brent$ time perl -MCGI -e0
real 0m0.026s
user 0m0.020s
sys 0m0.006s
(The parameters to the Perl interpreter load the rather large CGI module and then execute the line of code '0;'.)
Python is good. I would also check out The Computer Languages Benchmarks Game website:
http://shootout.alioth.debian.org/
It might be worth spending a bit of time understanding the benchmarks (including numbers for startup times and memory usage). Lots of languages are compared such as Perl, Python, Lua and Ruby. You can also compare these languages against benchmarks in C.
I agree with others in that you should probably try to make this a more portable C app instead of porting it over to something else since any scripting language is going to introduce significant overhead from a startup perspective, have a much larger memory footprint, and will probably be much slower.
In my experience, Python is the most efficient of the three, followed by Perl and then Ruby with the difference between Perl and Ruby being particularly large in certain areas. If you really want to try porting this to a scripting language, I would put together a prototype in the language you are most comfortable with and see if it comes close to your requirements. If you don't have a preference, start with Python as it is easy to learn and use and if it is too slow with Python, Perl and Ruby probably won't be able to do any better.
Remember that if you choose Python, you can also extend it in C if the performance isn't great. Heck, you could probably even use some of the code you have right now. Just recompile it and wrap it using Pyrex.
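As a concrete illustration of reusing existing C from Python, here is a sketch using the standard library's ctypes instead of the Pyrex mentioned above (libfast.so and fast_sum are hypothetical names for your compiled code):

import ctypes

# Load a shared library built from the existing C code (hypothetical name).
lib = ctypes.CDLL("./libfast.so")
lib.fast_sum.argtypes = [ctypes.POINTER(ctypes.c_double), ctypes.c_size_t]
lib.fast_sum.restype = ctypes.c_double

# ctypes arrays are accepted where a pointer to their element type is expected.
data = (ctypes.c_double * 4)(1.0, 2.0, 3.0, 4.0)
print(lib.fast_sum(data, len(data)))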
You can also do this fairly easily in Ruby, and in Perl (albeit with some more difficulty). Don't ask me about ways to do this though.
Can you instead have it be a long-running process and answer http or rpc requests?
This would satisfy the latency requirements in almost any scenario, but I don't know if that would break your memory footprint constraints.
At first sight, this sounds like over-engineering; as a rule of thumb, I suggest fixing things only when they are broken.
You have an already working application. Apparently you want to call the features it provides from a few more sources. That looks like the description of a service to me (and a service may be easier to maintain).
Finally, you also mentioned that this is part of a larger solution, so you may want to reuse the language and facilities of that larger solution. From the description you gave (XML + HTTP), it seems a fairly ordinary application that could be written in any general-purpose language (maybe a web container in Java?).
Some libraries can help you to make your code portable:
Boost,
Qt
more details may trigger more ideas :)
Port your app to Ruby. If your app is too slow, profile it and rewrite those parts in C.