NodeJS implementation for Python's pbkdf2_sha256.verify

I have to translate this Python code to NodeJS:
from passlib.hash import pbkdf2_sha256
pbkdf2_sha256.verify('12345678', '$pbkdf2-sha256$2000$8R7jHOOcs7YWImRM6V1LqQ$CIdNv8YlLlCZfeFJihZs7eQxBsauvVfV05v07Ca2Yzg')
>> True
The code above is the entire program, i.e. there are no other parameters/settings (just run pip install passlib before running it to install the passlib package).
I am looking for the correct implementation of validatePassword function in Node that will pass this positive implementation test:
validatePassword('12345678', '$pbkdf2-sha256$2000$8R7jHOOcs7YWImRM6V1LqQ$CIdNv8YlLlCZfeFJihZs7eQxBsauvVfV05v07Ca2Yzg')
>> true
Here is the documentation of the passlib.hash.pbkdf2_sha256 with its default parameters' values.
I tried to follow the answers from here with the data from the Python code above, but those solutions didn't pass the test.
I would appreciate some help with this implementation (preferably using built-in NodeJS crypto package).
Thank you in advance.

This would work:
const crypto = require('crypto')

// passlib layout: $pbkdf2-sha256$rounds$salt$checksum
// salt and checksum use passlib's "adapted base64": '+' becomes '.', '=' padding is stripped
function validatePassword(secret, format) {
    const parts = format.split('$')
    const digest = parts[1].split('-')[1]    // "sha256"
    const rounds = parseInt(parts[2], 10)
    // Node's base64 decoder tolerates the stripped padding
    const salt = Buffer.from(parts[3].replace(/\./g, '+'), 'base64')
    const hash = crypto.pbkdf2Sync(secret, salt, rounds, 32, digest)
    return parts[4] === hash.toString('base64').replace(/=/g, '').replace(/\+/g, '.')
}

I was not able to get this working with the other answers here, but they did lead me in the right direction.
Here's where I landed:
// eslint-2017
import crypto from 'crypto';

const encode = (password, { algorithm, salt, iterations }) => {
    const hash = crypto.pbkdf2Sync(password, salt, iterations, 32, 'sha256');
    return `${algorithm}$${iterations}$${salt}$${hash.toString('base64')}`;
};

const decode = (encoded) => {
    const [algorithm, iterations, salt, hash] = encoded.split('$');
    return {
        algorithm,
        hash,
        iterations: parseInt(iterations, 10),
        salt,
    };
};

const verify = (password, encoded) => {
    const decoded = decode(encoded);
    const encodedPassword = encode(password, decoded);
    return encoded === encodedPassword;
};
// <algorithm>$<iterations>$<salt>$<hash>
const encoded = 'pbkdf2_sha256$120000$bOqAASYKo3vj$BEBZfntlMJJDpgkAb81LGgdzuO35iqpig0CfJPU4TbU=';
const password = '12345678';
console.info(verify(password, encoded));
I know this is an old post, but it's one of the top results on Google, so figured I'd help someone out that comes across this in 2020.

You can use the native Node.js crypto.pbkdf2 API:
const crypto = require('crypto');
crypto.pbkdf2('secret', 'salt', 100000, 64, 'sha256', (err, derivedKey) => {
    if (err) throw err;
    console.log(derivedKey.toString('hex')); // '3745e48...08d59ae'
});
It has the following API:
password <string>
salt <string>
iterations <number>
keylen <number>
digest <string>
callback <Function>
err <Error>
derivedKey <Buffer>
So you will need to play with the input variables to get the same result as in Python.
An alternative approach
I played with the input variables without much success, and the simplest idea I came up with was to write a Python script that validates the passwords and invoke it from Node.js with child_process.spawn.

This worked for me, based on node-django-hasher (which I didn't use directly because it depends on node-gyp):
const crypto = require('crypto');

// Note: this parses Django-style hashes ("pbkdf2_sha256$iterations$salt$hash"),
// where the salt is plain text and the hash keeps its base64 padding
function validatePassword(plain, hashed) {
    const parts = hashed.split('$');
    const salt = parts[2];
    const iterations = parseInt(parts[1], 10);
    const keylen = 32;
    const digest = parts[0].split('_')[1];
    const value = parts[3];
    const derivedKey = crypto.pbkdf2Sync(plain, salt, iterations, keylen, digest);
    return value === derivedKey.toString('base64');
}

Finally solved it. Because passlib does some transformations on the base64-encoded strings, none of the mentioned solutions worked for me. I ended up writing my own node module which is tested against passlib 1.7.4 hashes. Thanks @kayluhb for pushing me in the right direction!
Feel free to use it: node-passlib

Related

Zlib is unable to extract a compress String in java, compression is done in python

I'm trying to decompress a string in Java; the string was compressed in Python with base64 encoding.
I spent a day trying to resolve the issue. The file can be decoded easily online and also in Python.
I found similar posts where people had trouble compressing and decompressing between Java and Python.
zip.txt
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Base64;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class CompressionUtils {
    private static final int BUFFER_SIZE = 1024;

    public static byte[] decompress(final byte[] data) {
        final Inflater inflater = new Inflater();
        inflater.setInput(data);
        ByteArrayOutputStream outputStream = new ByteArrayOutputStream(data.length);
        byte[] buffer = new byte[data.length];
        try {
            while (!inflater.finished()) {
                final int count = inflater.inflate(buffer);
                outputStream.write(buffer, 0, count);
            }
            outputStream.close();
        } catch (DataFormatException | IOException e) {
            e.printStackTrace();
        }
        inflater.end();
        return outputStream.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        decompress(Base64.getDecoder().decode(Files.readAllBytes(Path.of("zip.txt"))));
    }
}
Error:
java.util.zip.DataFormatException: incorrect header check
at java.base/java.util.zip.Inflater.inflateBytesBytes(Native Method)
at java.base/java.util.zip.Inflater.inflate(Inflater.java:378)
at java.base/java.util.zip.Inflater.inflate(Inflater.java:464)
Tried also this, after @Mark Adler's suggestion:
public static void main(String[] args) throws IOException {
    byte[] decoded = Base64.getDecoder().decode(Files.readAllBytes(Path.of("zip.txt")));
    ByteArrayInputStream in = new ByteArrayInputStream(decoded);
    GZIPInputStream gzStream = new GZIPInputStream(in);
    decompress(gzStream.readAllBytes());
    gzStream.close();
}
java.util.zip.DataFormatException: incorrect header check
at java.base/java.util.zip.Inflater.inflateBytesBytes(Native Method)
at java.base/java.util.zip.Inflater.inflate(Inflater.java:378)
at java.base/java.util.zip.Inflater.inflate(Inflater.java:464)
at efrisapi/com.efrisapi.util.CompressionUtils.decompress(CompressionUtils.java:51)
at efrisapi/com.efrisapi.util.CompressionUtils.main(CompressionUtils.java:67)
That is a gzip stream, not a zlib stream. Use GZIPInputStream.
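The distinction is easy to see from Python itself (a sketch with a hypothetical payload): gzip output starts with the magic bytes 1f 8b, which a raw zlib/Inflater decoder rejects with exactly the "incorrect header check" error above:

```python
import base64
import gzip
import zlib

payload = b"hello world"
blob = base64.b64encode(gzip.compress(payload))   # how zip.txt was likely produced

data = base64.b64decode(blob)
print(data[:2].hex())        # '1f8b' -> gzip magic, not zlib's 0x78 header byte

# gzip.decompress (like Java's GZIPInputStream) accepts it:
print(gzip.decompress(data))

# zlib.decompress with default wbits raises "incorrect header check";
# wbits=31 (gzip framing) or wbits=47 (auto-detect) works:
print(zlib.decompress(data, wbits=31))
```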
I was looking in the wrong direction: the string was gzipped. Below is the code that resolves it. Thank you @Mark Adler for identifying the issue.
public static void deCompressGZipFile(String gZippedFile, String newFile) throws IOException {
    byte[] decoded = Base64.getDecoder().decode(Files.readAllBytes(Path.of(gZippedFile)));
    ByteArrayInputStream in = new ByteArrayInputStream(decoded);
    GZIPInputStream gZIPInputStream = new GZIPInputStream(in);
    FileOutputStream fos = new FileOutputStream(newFile);
    byte[] buffer = new byte[1024];
    int len;
    while ((len = gZIPInputStream.read(buffer)) > 0) {
        fos.write(buffer, 0, len);
    }
    // Ideally close these in a finally block, or use try-with-resources
    fos.close();
    gZIPInputStream.close();
}

How can I embed a Python function that returns a string in C using cffi?

I'm trying to embed a Python function in C using PyPy and cffi. I'm following this guide from the PyPy documentation.
The problem is, all the examples I've found operate on ints, and my function takes a string and returns a string. I can't seem to figure out how to embed this function in C, as C doesn't seem to really have strings, rather making do with arrays of chars.
Here's what I've tried:
# interface.py
import cffi

ffi = cffi.FFI()
ffi.cdef('''
struct API {
    char (*generate_cool_page)(char url[]);
};
''')

...

@ffi.callback("char[] (char[])")
def generate_cool_page(url):
    # do some processing with BS4
    return str(soup)

def fill_api(ptr):
    global api
    api = ffi.cast("struct API*", ptr)
    api.generate_cool_page = generate_cool_page
--
// c_tests.c
#include "PyPy.h"
#include <stdio.h>
#include <stdlib.h>

struct API {
    char (*generate_cool_page)(char url[]);
};

struct API api; /* global var */

int initialize_api(void)
{
    static char source[] =
        "import sys; sys.path.insert(0, '.'); "
        "import interface; interface.fill_api(c_argument)";
    int res;

    rpython_startup_code();
    res = pypy_setup_home(NULL, 1);
    if (res) {
        fprintf(stderr, "Error setting pypy home!\n");
        return -1;
    }
    res = pypy_execute_source_ptr(source, &api);
    if (res) {
        fprintf(stderr, "Error calling pypy_execute_source_ptr!\n");
        return -1;
    }
    return 0;
}

int main(void)
{
    if (initialize_api() < 0)
        return 1;
    printf(api.generate_cool_page("https://example.com"));
    return 0;
}
When I run gcc -I/opt/pypy3/include -Wno-write-strings c_tests.c -L/opt/pypy3/bin -lpypy3-c -g -o c_tests and then run ./c_tests, I get this error:
debug: OperationError:
debug: operror-type: CDefError
debug: operror-value: cannot render the type <char()(char *)>: it is a function type, not a pointer-to-function type
Error calling pypy_execute_source_ptr!
I don't have a ton of experience with C and I feel like I'm misrepresenting the string argument/return value. How do I do this properly?
Thanks for your help!
Note that you should not be using pypy's deprecated interface to embedding; instead, see http://cffi.readthedocs.io/en/latest/embedding.html.
The C language doesn't have "strings", but only arrays of chars. In C, a function that wants to return a "string" is usually written
differently: it accepts as first argument a pointer to a pre-existing buffer (of type char[]), and as a second argument the length of that buffer; and when called, it fills the buffer. This can be messy because you ideally need to handle buffer-too-small situations in the caller, e.g. allocate a bigger array and call the function again.
Alternatively, some functions give up and return a freshly malloc()-ed char *. Then the caller must remember to free() it, otherwise a leak occurs. I would recommend that approach in this case because guessing the maximum length of the string before the call might be difficult.
So, something like this. Assuming you start from http://cffi.readthedocs.io/en/latest/embedding.html, change plugin.h to contain:
// return type is "char *"
extern char *generate_cool_page(char url[]);

And change this bit of plugin_build.py:

ffibuilder.embedding_init_code("""
    from my_plugin import ffi, lib

    @ffi.def_extern()
    def generate_cool_page(url):
        url = ffi.string(url)
        # do some processing
        return lib.strdup(str(soup))   # calls malloc()
""")

# declare strdup() to cffi (the matching #include <string.h> goes in set_source())
ffibuilder.cdef("""
    char *strdup(const char *);
""")
From the C code, you don't need initialize_api() at all in the new embedding mode; instead, you just say #include "plugin.h" and call the function directly:
char *data = generate_cool_page("https://example.com");
if (data == NULL) { handle_errors... }
printf("Got this: '%s'\n", data);
free(data); // important!
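The strdup()-and-free() contract described above can be seen in miniature from Python with ctypes (illustrative only; assumes a Unix-like libc available via CDLL(None)):

```python
import ctypes

# Load the C runtime; on Linux, CDLL(None) exposes libc symbols
libc = ctypes.CDLL(None)
libc.strdup.restype = ctypes.c_void_p        # keep the raw pointer so free() gets the real address
libc.strdup.argtypes = [ctypes.c_char_p]

ptr = libc.strdup(b"cool page")              # freshly malloc()-ed copy
text = ctypes.string_at(ptr).decode("utf-8") # the caller reads the string...
libc.free(ctypes.c_void_p(ptr))              # ...and must free() it, or it leaks
print(text)
```

Setting restype to c_void_p matters: the default int return type would truncate the pointer on 64-bit systems.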

NativeProcess communication giving error

I am trying to communicate with a Python script through ActionScript. It gives me an error on this line:
var stdOut:ByteArray = process.standardOutput;
from the function shown below:
public function onOutputData(event:ProgressEvent):void
{
    var stdOut:ByteArray = process.standardOutput; //error
    var data:String = stdOut.readUTFBytes(process.standardOutput.bytesAvailable);
    trace("Got: ", data);
}
Error is:
Implicit coercion of a value with static type IDataInput to a possibly
unrelated type ByteArray.
I am following the same approach as on Adobe's page. Here is some testable code :
package
{
    import flash.display.Sprite;
    import flash.desktop.NativeProcessStartupInfo;
    import flash.filesystem.File;
    import flash.desktop.NativeProcess;
    import flash.events.ProgressEvent;
    import flash.utils.ByteArray;

    public class InstaUtility extends Sprite
    {
        public var nativeProcessStartupInfo:NativeProcessStartupInfo = new NativeProcessStartupInfo();
        public var file:File = new File("C:/Python27/python.exe");
        public var process:NativeProcess = new NativeProcess();

        public function InstaUtility()
        {
            nativeProcessStartupInfo.executable = file;
            nativeProcessStartupInfo.workingDirectory = File.applicationDirectory.resolvePath(".");
            trace("Location " + File.applicationDirectory.resolvePath(".").nativePath);
            var processArgs:Vector.<String> = new Vector.<String>();
            processArgs[0] = "test.py";
            nativeProcessStartupInfo.arguments = processArgs;
            var process:NativeProcess = new NativeProcess();
            process.addEventListener(ProgressEvent.STANDARD_OUTPUT_DATA, onOutputData);
            process.start(nativeProcessStartupInfo);
        }

        public function onOutputData(event:ProgressEvent):void
        {
            var stdOut:ByteArray = process.standardOutput; //error
            var data:String = stdOut.readUTFBytes(process.standardOutput.bytesAvailable);
            trace("Got: ", data);
        }
    }
}
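Not part of the original post, but for completeness, here is a minimal hypothetical test.py that the code above could spawn. Flushing stdout explicitly matters, since buffered output can delay the STANDARD_OUTPUT_DATA event:

```python
# test.py -- a hypothetical stand-in for the script the AIR app spawns
import sys

def make_message():
    # the payload the AIR app will receive via STANDARD_OUTPUT_DATA
    return "hello from Python"

if __name__ == "__main__":
    sys.stdout.write(make_message())
    sys.stdout.flush()  # flush explicitly so the output event fires promptly
```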
The NativeProcess could not be started. Not supported in current
profile.
Are you testing in Flash IDE?
Test within IDE : In your AIR Publish Settings make sure you ticked only "extended Desktop" when debugging through IDE. This way you also get traces etc.
Test after Publish : You must tick both "Desktop" and "extended Desktop" and also tick "Windows Installer (.exe)". Install your App using the generated .exe file (not the .air file).
Implicit coercion of a value with static type IDataInput to a possibly
unrelated type ByteArray.
var stdOut:ByteArray = process.standardOutput; //error — that is not how it's done! Don't create a new var each time the progress event fires. Each firing holds only around 32 KB or 64 KB of bytes (I can't remember which), so if the expected result is larger, the event will keep firing with more chunks... Use and recycle a single public ByteArray to hold all the result data.
Try a setup like below :
//# Declare the public variables
public var stdOut : ByteArray = new ByteArray();
public var data_String : String = "";
Your process also needs a NativeProcessExitEvent.EXIT listener.
process.addEventListener(NativeProcessExitEvent.EXIT, on_Process_Exit );
Before you .start a process, also clear the byteArray ready for new data with stdOut.clear();.
Now your progressEvent can look like this below... (Process puts result data into stdOut bytes).
public function onOutputData (event:ProgressEvent) : void
{
    //var stdOut:ByteArray = process.standardOutput; //error
    //# Progress could fire many times so keep adding data to build the final result
    //# "stdOut.length" will be zero at first but add more data to tail end (ie: length)
    process.standardOutput.readBytes( stdOut, stdOut.length, process.standardOutput.bytesAvailable );

    //# Below should be in a Process "Exit" listener but might work here too
    stdOut.position = 0; //move pointer back before reading bytes
    data_String = stdOut.readUTFBytes( stdOut.length );
    trace("function onOutputData -- Got : " + data_String );
}
But you really need to add an "onProcessExit" listener and then only check for results when the process itself has completed. (Tracing here is much safer for a guaranteed result).
public function on_Process_Exit (event : NativeProcessExitEvent) : void
{
    trace ("PYTHON Process finished : ############# " );
    stdOut.position = 0; //# move pointer back before reading bytes
    data_String = stdOut.readUTFBytes( stdOut.length );
    trace("PYTHON Process Got : " + data_String );
}

Converting Strings in Linux using SWIG for Python

I have a C++ class that is able to output strings in normal ASCII or wide format. I want to get the output in Python as a string. I am using SWIG (version 3.0.4) and have read the SWIG documentation. I'm using the following typemap to convert from a standard c string to my C++ class:
%typemap(out) myNamespace::MyString &
{
    $result = PyString_FromString(const char *v);
}
This works fine in Windows with the VS2010 compiler, but it is not working completely in Linux. When I compile the wrap file under Linux, I get the following error:
error: cannot convert ‘std::string*’ to ‘myNamespace::MyString*’ in assignment
So I tried adding an extra typemap to the Linux interface file as so:
%typemap(in) myNamespace::MyString*
{
    $result = PyString_FromString(std::string*);
}
But I still get the same error. If I manually go into the wrap code and fix the assignment like so:
arg2 = (myNamespace::MyString*) ptr;
then the code compiles just fine. I don't see why my additional typemap isn't working. Any ideas or solutions would be greatly appreciated. Thanks in advance.
It doesn't look like your typemap is using the arguments quite correctly. You should have something like this instead:
%typemap(out) myNamespace::MyString &
{
    $result = PyString_FromString($1);
}
Where the '$1' is the first argument. See the SWIG special variables for more information [http://www.swig.org/Doc3.0/Typemaps.html#Typemaps_special_variables]
EDIT:
To handle the input typemap, you will need something like this:
%typemap(in) myNamespace::MyString*
{
    const char* pChars = "";
    if(PyString_Check($input))
    {
        pChars = PyString_AsString($input);
    }
    $1 = new myNamespace::MyString(pChars);
}
You can do more error checking and handle Unicode with the following code:
%typemap(in) myNamespace::MyString*
{
    const char* pChars = "";
    PyObject* pyobj = $input;
    if(PyString_Check(pyobj))
    {
        pChars = PyString_AsString(pyobj);
        $1 = new myNamespace::MyString(pChars);
    }
    else if(PyUnicode_Check(pyobj))
    {
        PyObject* tmp = PyUnicode_AsUTF8String(pyobj);
        pChars = PyString_AsString(tmp);
        $1 = new myNamespace::MyString(pChars);
    }
    else
    {
        std::string strTemp;
        int rrr = SWIG_ConvertPtr(pyobj, (void **) &strTemp, $descriptor(String), 0);
        if(!SWIG_IsOK(rrr))
            SWIG_exception_fail(SWIG_ArgError(rrr), "Expected a String "
                                "in method '$symname', argument $argnum of type '$type'");
        $1 = new myNamespace::MyString(strTemp);
    }
}
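As a plain-Python sketch of the branches the Unicode-aware typemap above takes (the to_chars name is illustrative, not SWIG API): byte strings pass through as-is like PyString_AsString, text strings are UTF-8 encoded like PyUnicode_AsUTF8String, and anything else fails:

```python
def to_chars(pyobj):
    # mirrors the typemap: PyString_Check branch, PyUnicode_Check branch, else fail
    if isinstance(pyobj, bytes):       # PyString_Check (a byte string)
        return pyobj
    if isinstance(pyobj, str):         # PyUnicode_Check
        return pyobj.encode("utf-8")   # PyUnicode_AsUTF8String
    raise TypeError("Expected a String")

print(to_chars(u"héllo"))
```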

Access violation on sqlite3_mutex_enter(). Why?

I'm using sqlite-amalgamation-3080500 within a Python3/C module.
My python module creates some tables and then returns the sqlite3's handle to the python environment using PyCapsule.
So, in a second module, I try to create more tables using this same sqlite3 handle. But my program breaks: I get an "access violation" in sqlite3_mutex_enter(), which was called by sqlite3_prepare_v2().
First-chance exception at 0x00000000 in python.exe: 0xC0000005: Access
violation executing location 0x00000000. Unhandled exception at
0x7531C9F1 in python.exe: 0xC0000005: Access violation executing
location 0x00000000.
Is it really thread-safe? I think I can do it this way; I've already done it in the past, but that was with Xcode on a Mac. Now I'm trying to do the same with MSVC 2013.
Below is my code to run queries:
bool register_run(register_db_t *pReg, const char *query)
{
    int ret, len;
    sqlite3_stmt *stmt;
    const char *err;

    stmt = NULL;
    len = (int)strlen(query);
    ret = sqlite3_prepare_v2(pReg->pDb, query, len, &stmt, NULL);
    if (ret != SQLITE_OK) {
        err = sqlite3_errmsg(pReg->pDb);
        fprintf(stderr, "sqlite3_prepare_v2 error: %s\n%s\n",
                err, query);
        return false;
    }
    ret = register_run_stmt(pReg, query, stmt);
    sqlite3_finalize(stmt);
    return ret;
}
And this is how I export the handle to use it in my 2nd C module:
// Register's getattro
static PyObject* Register_getattro(RegisterObject *self, PyObject *name)
{
    // ...
    } else if (PyUnicode_CompareWithASCIIString(name, "handle") == 0) {
        register_db_t *handle = self->db;
        return PyCapsule_New(handle, NULL, NULL);
    }
    return PyObject_GenericGetAttr((PyObject *)self, name);
}
This is the Python code gluing the pieces together:
import registermodule, loggermodule
reg = registermodule.Register("mydata.db")
loggermodule.set_register(reg.handle)
And here is how I use the handle in my second module:
static PyObject* loggerm_set_register(PyObject *self, PyObject *args)
{
    register_db_t *pReg;
    PyObject *capsule;

    if (!PyArg_ParseTuple(args, "O:set_register", &capsule)) {
        return NULL;
    }
    if (!PyCapsule_CheckExact(capsule)) {
        PyErr_SetString(PyExc_ValueError,
                        "The object isn't a valid pointer.");
        return NULL;
    }
    pReg = PyCapsule_GetPointer(capsule, NULL);
    if (!logger_set_register(pReg)) {
        PyErr_SetString(PyExc_SystemError,
                        "Could not set the pointer as register.");
        return NULL;
    }
    Py_RETURN_NONE;
}
And finally the routine that is breaking:
bool logger_set_register(register_db_t *pReg)
{
    char *query = "CREATE TABLE IF NOT EXISTS tab_logger ("
                  "date NUMERIC,"
                  "level TEXT,"
                  "file TEXT,"
                  "function TEXT,"
                  "line INTEGER,"
                  "message TEXT)";
    g_pReg = pReg;
    return register_run(g_pReg, query);
}
And the sqlite3's routine that is breaking all:
SQLITE_API void sqlite3_mutex_enter(sqlite3_mutex *p){
    if( p ){
        sqlite3GlobalConfig.mutex.xMutexEnter(p);
    }
}
Sorry about lots of snippets, but I've no clue about the problem.
Thanks in advance.
I don't know why, but globals are not shared between Python C modules on Windows. That wasn't the case on Mac OS, in my previous experience.
On Windows, Python modules are DLLs, so they don't share the same globals.
I discovered that sqlite3GlobalConfig.mutex was NULL in my second Python C module, which was causing the access violation. But sqlite3GlobalConfig.mutex is a global variable; it should have been initialized by the previous module.
Knowing this, I solved the problem by calling this function:
sqlite3_initialize();
And everything is working properly!
Not sure your problem is directly related to sqlite3_initialize(), because sqlite3_initialize() is automatically called, at least once, during sqlite3_open_v2().
I suggest digging into the lack of the SQLITE_OPEN_FULLMUTEX option during sqlite3_open_v2().
Recommendation: always set this option. The penalty of using a mutex is extremely low, especially in view of all the overhead Python adds. It is negligible in a single thread and (nearly) mandatory in multithreaded code; I don't even understand why it is still an option. It should always be on.
So better safe than sorry; I don't know of any real downside to using it.
SQLite is "lite".
BTW, it is legitimate for sqlite3GlobalConfig.mutex to be NULL. It should be used only by SQLite itself, which checks for this condition; search sqlite3.c for "sqlite3_mutex_enter(db->mutex);" to understand what I mean.
