glCreateProgram().uniform in Python

I am trying to port some JavaScript/OpenGL code to Python, but I cannot figure out what I am doing wrong in translating prog.uniform[u] = gl.getUniformLocation(prog, u);
JavaScript:
let v = buildShader(vert, gl.VERTEX_SHADER);
let f = buildShader(frag, gl.FRAGMENT_SHADER);
let prog = gl.createProgram();
gl.attachShader(prog, v);
gl.attachShader(prog, f);
gl.linkProgram(prog);
prog.uniform = {};
let u = ['model','bounds','frac','aspect'];
_.each(u, function(u){ prog.uniform[u] = gl.getUniformLocation(prog, u); });
Python 3 / PyOpenGL:
v = self.buildShader(vert, GL_VERTEX_SHADER)
f = self.buildShader(frag, GL_FRAGMENT_SHADER)
prog = glCreateProgram()
glAttachShader(prog, v)
glAttachShader(prog, f)
glLinkProgram(prog)
for u in ['model','bounds','frac','aspect']:
    loc = glGetUniformLocations(prog,u)
    glProgramUniform(prog,loc,u)

glProgramUniform assigns a value to a uniform; its 3rd parameter is the value. glProgramUniform(prog, loc, u) therefore makes no sense when u is a string holding the name of the uniform.
You have to create a dictionary that maps each uniform name to its location:
uniform = {}
for u in ['model','bounds','frac','aspect']:
    uniform[u] = glGetUniformLocation(prog, u)
or simply
uniform = { u : glGetUniformLocation(prog, u) for u in ['model','bounds','frac','aspect'] }
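For completeness, a minimal sketch of how such a location dictionary is typically used afterwards (the uniform types chosen here are assumptions, not taken from the question):

from OpenGL.GL import *

uniform = {u: glGetUniformLocation(prog, u)
           for u in ['model', 'bounds', 'frac', 'aspect']}

glUseProgram(prog)                    # uniforms are set on the bound program
glUniform1f(uniform['frac'], 0.5)     # assuming 'frac' is a float uniform
glUniform1f(uniform['aspect'], 16/9)  # assuming 'aspect' is a float uniform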

How to solve TypeError: can't convert complex to float

# T in Fahrenheit, P in psia, gas FVF in bbl/cu.ft, gas viscosity in cP
import math

def GFVF(temp, pres, Gamma_G):
    Z = ZFACT(temp, pres, Gamma_G)
    T = TPC1(Gamma_G)
    P = PPC1(Gamma_G)
    return 0.0050035*Z*(T+459.69)/P

def GVISC(temp, pres, gamma_G):
    temp = temp + 459.67
    Mw = 28.964*gamma_G
    Bg = GFVF(temp, pres, gamma_G)
    Pg = 0.000007522*(Mw/Bg)
    K = ((9.379+0.01607*Mw)*(temp**1.5))/(209.2+19.26*Mw+temp)
    X = 3.448+(986.4/temp)+0.01009*Mw
    Y = 2.447 - 0.2224*X
    exponent = X*pow(Pg, Y)
    Ug = (10**(-4))*K*(math.exp(exponent))
    return Ug
When I run the function, I get an error at the last line: "can't convert complex to float". Although I'm only using float numbers, I don't know how to resolve this error. I've tried casting, but it doesn't work.
Here's my main:
gamma_G = 0.65
y1_zfactor = list()
y2_fvf = list()
y3_visc = list()
x_pressure = list()
temp = 170
for p in range(1000, 5200, 200):
    pres = p
    z_factor_value = ZFACT(temp, pres, gamma_G)
    GFVF_value = GFVF(temp, pres, gamma_G)
    GVISC_value = GVISC(temp, pres, gamma_G)
    y1_zfactor.append(z_factor_value)
    y2_fvf.append(GFVF_value)
    y3_visc.append(GVISC_value)
    x_pressure.append(p)
data_zfactor = {
    "pressure": x_pressure,
    "z factor": y1_zfactor
}
df_zfact = pd.DataFrame(data_zfactor)
print(df_zfact)
df_zfact.plot(kind='scatter', x='pressure', y='z factor')
plt.show()
data_gfvf = {
    "pressure": x_pressure,
    "FVF": y2_fvf
}
df_gfvf = pd.DataFrame(data_gfvf)
print(df_gfvf)
df_gfvf.plot(kind='scatter', x='pressure', y='FVF')
plt.show()
data_gvisc = {
    "pressure": x_pressure,
    "Viscosity": y3_visc
}
df_gvisc = pd.DataFrame(data_gvisc)
print(df_gvisc)
df_gvisc.plot(kind='scatter', x='pressure', y='Viscosity')
plt.show()
The posted code raises NameError: name 'ZFACT' is not defined, so I can't run it as given.
The function np.lib.scimath.power() may help you; please check its documentation:
np.lib.scimath.power(x, p)
Return x to the power p, (x**p). If x contains negative values, the output is converted to the complex domain.
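For illustration, a minimal sketch of how this error typically arises (the values below are made up, not taken from the question): in Python 3, a negative base raised to a non-integer power produces a complex number, and math.exp() then fails with exactly this message:

import math

Pg = -0.5                      # e.g. if Bg came out negative upstream
Y = 1.3
exponent = 3.448 * pow(Pg, Y)  # complex: negative base, fractional exponent
print(type(exponent))          # <class 'complex'>
math.exp(exponent)             # TypeError: can't convert complex to float

So it is worth checking whether Pg (and hence Bg from GFVF) becomes negative for some inputs; np.lib.scimath.power only makes the complex promotion explicit rather than avoiding it.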

Louvain algorithm for graph clustering gives completely different result when running in Spark/Scala and Python, why is that happening?

I am running community detection on graphs built from telecom CDR data. At first I was working with very dense graphs containing 10,000 nodes, and the algorithm was producing 150 to 170 communities per graph. I was using the Louvain community detection algorithm implemented in Scala for Spark.
When I try to run the same algorithm implemented in C#, I get around 10 communities per graph. I also did some testing with a smaller graph of around 300 nodes, and the same thing occurs. When I run it in Spark with Scala I get around 50 communities; when I run it in Python or C# I get 8 to 10 communities.
I am really surprised to see such a difference. Every implementation I used (Scala, Python, or C#) refers to the paper by V. D. Blondel, https://arxiv.org/abs/0803.0476, so the algorithm should be the same, but the output is completely different. Has anyone experienced something like that when using Spark/Scala vs. Python/C#?
This is how the main class Louvain is called:
import org.apache.spark.graphx.{Edge, Graph}
import org.apache.spark.{SparkContext, SparkConf}
import org.apache.log4j.{Level, Logger}

object Driver {
  def main(args: Array[String]): Unit = {
    val config = LouvainConfig(
      "src/data/input/file_with_edges.csv", // input file
      "src/data/output/",                   // output dir
      1,                                    // parallelism
      2000,                                 // minimumCompressionProgress
      1,                                    // progressCounter
      ",")                                  // delimiter
    val sc = new SparkContext("local[*]", "Louvain")
    val louvain = new Louvain()
    louvain.run(sc, config)
  }
}
This is the Scala implementation I am using:
import scala.reflect.ClassTag

import com.esotericsoftware.kryo.io.{Input, Output}
import com.esotericsoftware.kryo.serializers.DefaultArraySerializers.ObjectArraySerializer
import com.esotericsoftware.kryo.{Kryo, KryoSerializable}
import org.apache.spark._
import org.apache.spark.graphx._
import org.apache.spark.rdd.RDD
import org.apache.spark.SparkContext._
import org.apache.spark.broadcast.Broadcast

class Louvain() extends Serializable {

  def getEdgeRDD(sc: SparkContext, conf: LouvainConfig, typeConversionMethod: String => Long = _.toLong): RDD[Edge[Long]] = {
    sc.textFile(conf.inputFile, conf.parallelism).map(row => {
      val tokens = row.split(conf.delimiter).map(_.trim())
      tokens.length match {
        case 2 => new Edge(typeConversionMethod(tokens(0)),
          typeConversionMethod(tokens(1)), 1L)
        case 3 => new Edge(typeConversionMethod(tokens(0)),
          typeConversionMethod(tokens(1)), tokens(2).toDouble.toLong)
        case _ => throw new IllegalArgumentException("invalid input line: " + row)
      }
    })
  }
  /**
   * Generates a new graph of type Graph[LouvainData, Long] based on an
   * input graph of type Graph[VD, Long]. The resulting graph can be used
   * for Louvain computation.
   */
  def createLouvainGraph[VD: ClassTag](graph: Graph[VD, Long]): Graph[LouvainData, Long] = {
    val nodeWeights = graph.aggregateMessages(
      (e: EdgeContext[VD, Long, Long]) => {
        e.sendToSrc(e.attr)
        e.sendToDst(e.attr)
      },
      (e1: Long, e2: Long) => e1 + e2
    )
    graph.outerJoinVertices(nodeWeights)((vid, data, weightOption) => {
      val weight = weightOption.getOrElse(0L)
      new LouvainData(vid, weight, 0L, weight, false)
    }).partitionBy(PartitionStrategy.EdgePartition2D).groupEdges(_ + _)
  }
  /**
   * Creates the messages passed between each vertex to convey
   * neighborhood community data.
   */
  def sendCommunityData(e: EdgeContext[LouvainData, Long, Map[(Long, Long), Long]]) = {
    val m1 = Map((e.srcAttr.community, e.srcAttr.communitySigmaTot) -> e.attr)
    val m2 = Map((e.dstAttr.community, e.dstAttr.communitySigmaTot) -> e.attr)
    e.sendToSrc(m2)
    e.sendToDst(m1)
  }

  /**
   * Merges neighborhood community data into a single message for each vertex.
   */
  def mergeCommunityMessages(m1: Map[(Long, Long), Long], m2: Map[(Long, Long), Long]) = {
    val newMap = scala.collection.mutable.HashMap[(Long, Long), Long]()
    m1.foreach({ case (k, v) =>
      if (newMap.contains(k)) newMap(k) = newMap(k) + v
      else newMap(k) = v
    })
    m2.foreach({ case (k, v) =>
      if (newMap.contains(k)) newMap(k) = newMap(k) + v
      else newMap(k) = v
    })
    newMap.toMap
  }
  /**
   * Returns the change in modularity that would result from a vertex
   * moving to a specified community.
   */
  def q(
    currCommunityId: Long,
    testCommunityId: Long,
    testSigmaTot: Long,
    edgeWeightInCommunity: Long,
    nodeWeight: Long,
    internalWeight: Long,
    totalEdgeWeight: Long): BigDecimal = {
    val isCurrentCommunity = currCommunityId.equals(testCommunityId)
    val M = BigDecimal(totalEdgeWeight)
    val k_i_in_L = if (isCurrentCommunity) edgeWeightInCommunity + internalWeight else edgeWeightInCommunity
    val k_i_in = BigDecimal(k_i_in_L)
    val k_i = BigDecimal(nodeWeight + internalWeight)
    val sigma_tot = if (isCurrentCommunity) BigDecimal(testSigmaTot) - k_i else BigDecimal(testSigmaTot)
    var deltaQ = BigDecimal(0.0)
    if (!(isCurrentCommunity && sigma_tot.equals(BigDecimal.valueOf(0.0)))) {
      deltaQ = k_i_in - (k_i * sigma_tot / M)
      //println(s" $deltaQ = $k_i_in - ( $k_i * $sigma_tot / $M")
    }
    deltaQ
  }
  /**
   * Joins vertices with community data from their neighborhood and
   * selects the best community for each vertex to maximize the change
   * in modularity.
   * Returns a new set of vertices with the updated vertex state.
   */
  def louvainVertJoin(
    louvainGraph: Graph[LouvainData, Long],
    msgRDD: VertexRDD[Map[(Long, Long), Long]],
    totalEdgeWeight: Broadcast[Long],
    even: Boolean) = {
    // innerJoin[U, VD2](other: RDD[(VertexId, U)])(f: (VertexId, VD, U) => VD2): VertexRDD[VD2]
    louvainGraph.vertices.innerJoin(msgRDD)((vid, louvainData, communityMessages) => {
      var bestCommunity = louvainData.community
      val startingCommunityId = bestCommunity
      var maxDeltaQ = BigDecimal(0.0)
      var bestSigmaTot = 0L
      // VertexRDD[scala.collection.immutable.Map[(Long, Long), Long]]
      // e.g. (1, Map((3,10) -> 2, (6,4) -> 2, (2,8) -> 2, (4,8) -> 2, (5,8) -> 2))
      // e.g. communityId: 3, sigmaTotal: 10, communityEdgeWeight: 2
      communityMessages.foreach({ case ((communityId, sigmaTotal), communityEdgeWeight) =>
        val deltaQ = q(
          startingCommunityId,
          communityId,
          sigmaTotal,
          communityEdgeWeight,
          louvainData.nodeWeight,
          louvainData.internalWeight,
          totalEdgeWeight.value)
        //println(" community: " + communityId + " sigma: " + sigmaTotal +
        //  " edgeweight: " + communityEdgeWeight + " q: " + deltaQ)
        if (deltaQ > maxDeltaQ || (deltaQ > 0 && (deltaQ == maxDeltaQ &&
          communityId > bestCommunity))) {
          maxDeltaQ = deltaQ
          bestCommunity = communityId
          bestSigmaTot = sigmaTotal
        }
      })
      // only allow changes from low to high communities on even cycles and
      // high to low on odd cycles
      if (louvainData.community != bestCommunity && ((even &&
        louvainData.community > bestCommunity) || (!even &&
        louvainData.community < bestCommunity))) {
        //println(" " + vid + " SWITCHED from " + louvainData.community + " to " + bestCommunity)
        louvainData.community = bestCommunity
        louvainData.communitySigmaTot = bestSigmaTot
        louvainData.changed = true
      }
      else {
        louvainData.changed = false
      }
      if (louvainData == null)
        println("vdata is null: " + vid)
      louvainData
    })
  }
  def louvain(
    sc: SparkContext,
    graph: Graph[LouvainData, Long],
    minProgress: Int = 1,
    progressCounter: Int = 1): (Double, Graph[LouvainData, Long], Int) = {
    var louvainGraph = graph.cache()
    val graphWeight = louvainGraph.vertices.map(louvainVertex => {
      val (vertexId, louvainData) = louvainVertex
      louvainData.internalWeight + louvainData.nodeWeight
    }).reduce(_ + _)
    val totalGraphWeight = sc.broadcast(graphWeight)
    println("totalEdgeWeight: " + totalGraphWeight.value)
    // gather community information from each vertex's local neighborhood
    var communityRDD =
      louvainGraph.aggregateMessages(sendCommunityData, mergeCommunityMessages)
    var activeMessages = communityRDD.count() // materializes the msgRDD
                                              // and caches it in memory
    var updated = 0L - minProgress
    var even = false
    var count = 0
    val maxIter = 100000
    var stop = 0
    var updatedLastPhase = 0L
    do {
      count += 1
      even = !even
      // label each vertex with its best community based on neighboring
      // community information
      val labeledVertices = louvainVertJoin(louvainGraph, communityRDD,
        totalGraphWeight, even).cache()
      // calculate the new sigma total value for each community (total weight
      // of each community)
      val communityUpdate = labeledVertices
        .map({ case (vid, vdata) => (vdata.community, vdata.nodeWeight +
          vdata.internalWeight) })
        .reduceByKey(_ + _).cache()
      // map each vertex ID to its updated community information
      val communityMapping = labeledVertices
        .map({ case (vid, vdata) => (vdata.community, vid) })
        .join(communityUpdate)
        .map({ case (community, (vid, sigmaTot)) => (vid, (community, sigmaTot)) })
        .cache()
      // join the community-labeled vertices with the updated community info
      val updatedVertices = labeledVertices.join(communityMapping).map({
        case (vertexId, (louvainData, communityTuple)) =>
          val (community, communitySigmaTot) = communityTuple
          louvainData.community = community
          louvainData.communitySigmaTot = communitySigmaTot
          (vertexId, louvainData)
      }).cache()
      updatedVertices.count()
      labeledVertices.unpersist(blocking = false)
      communityUpdate.unpersist(blocking = false)
      communityMapping.unpersist(blocking = false)
      val prevG = louvainGraph
      louvainGraph = louvainGraph.outerJoinVertices(updatedVertices)((vid, old, newOpt) => newOpt.getOrElse(old))
      louvainGraph.cache()
      // gather community information from each vertex's local neighborhood
      val oldMsgs = communityRDD
      communityRDD = louvainGraph.aggregateMessages(sendCommunityData, mergeCommunityMessages).cache()
      activeMessages = communityRDD.count() // materializes the graph
                                            // by forcing computation
      oldMsgs.unpersist(blocking = false)
      updatedVertices.unpersist(blocking = false)
      prevG.unpersistVertices(blocking = false)
      // half of the communities can switch on even cycles and the other half
      // on odd cycles (to prevent deadlocks), so we only want to look for
      // progress on odd cycles (after all vertices have had a chance to move)
      if (even) updated = 0
      updated = updated + louvainGraph.vertices.filter(_._2.changed).count
      if (!even) {
        println(" # vertices moved: " + java.text.NumberFormat.getInstance().format(updated))
        if (updated >= updatedLastPhase - minProgress) stop += 1
        updatedLastPhase = updated
      }
    } while (stop <= progressCounter && (even || (updated > 0 && count < maxIter)))
    println("\nCompleted in " + count + " cycles")
    // use each vertex's neighboring community data to calculate the
    // global modularity of the graph
    val newVertices =
      louvainGraph.vertices.innerJoin(communityRDD)((vertexId, louvainData,
        communityMap) => {
        // sum the node's internal weight and all of its edges that are in
        // its community
        val community = louvainData.community
        var accumulatedInternalWeight = louvainData.internalWeight
        val sigmaTot = louvainData.communitySigmaTot.toDouble
        def accumulateTotalWeight(totalWeight: Long, item: ((Long, Long), Long)) = {
          val ((communityId, sigmaTotal), communityEdgeWeight) = item
          if (louvainData.community == communityId)
            totalWeight + communityEdgeWeight
          else
            totalWeight
        }
        accumulatedInternalWeight = communityMap.foldLeft(accumulatedInternalWeight)(accumulateTotalWeight)
        val M = totalGraphWeight.value
        val k_i = louvainData.nodeWeight + louvainData.internalWeight
        val q = (accumulatedInternalWeight.toDouble / M) - ((sigmaTot * k_i) / math.pow(M, 2))
        //println(s"vid: $vid community: $community $q = ($k_i_in / $M) - ( ($sigmaTot * $k_i) / math.pow($M, 2) )")
        if (q < 0)
          0
        else
          q
      })
    val actualQ = newVertices.values.reduce(_ + _)
    // return the modularity value of the graph along with the
    // graph; vertices are labeled with their community
    (actualQ, louvainGraph, count / 2)
  }
  def compressGraph(graph: Graph[LouvainData, Long], debug: Boolean = true): Graph[LouvainData, Long] = {
    // aggregate the edge weights of self loops (edges with both src and dst in the same community).
    // WARNING: cannot use graph.mapReduceTriplets because we are mapping to new vertexIds
    val internalEdgeWeights = graph.triplets.flatMap(et => {
      if (et.srcAttr.community == et.dstAttr.community) {
        Iterator((et.srcAttr.community, 2 * et.attr)) // count the weight from both nodes
      }
      else Iterator.empty
    }).reduceByKey(_ + _)
    // aggregate the internal weights of all nodes in each community
    val internalWeights = graph.vertices.values.map(vdata =>
      (vdata.community, vdata.internalWeight))
      .reduceByKey(_ + _)
    // join internal weights and self edges to find the new internal weight of each community
    val newVertices = internalWeights.leftOuterJoin(internalEdgeWeights).map({ case (vid, (weight1, weight2Option)) =>
      val weight2 = weight2Option.getOrElse(0L)
      val state = new LouvainData()
      state.community = vid
      state.changed = false
      state.communitySigmaTot = 0L
      state.internalWeight = weight1 + weight2
      state.nodeWeight = 0L
      (vid, state)
    }).cache()
    // translate each vertex edge to a community edge
    val edges = graph.triplets.flatMap(et => {
      val src = math.min(et.srcAttr.community, et.dstAttr.community)
      val dst = math.max(et.srcAttr.community, et.dstAttr.community)
      if (src != dst) Iterator(new Edge(src, dst, et.attr))
      else Iterator.empty
    }).cache()
    // generate a new graph where each community of the previous graph is
    // now represented as a single vertex
    val compressedGraph = Graph(newVertices, edges)
      .partitionBy(PartitionStrategy.EdgePartition2D).groupEdges(_ + _)
    // calculate the weighted degree of each node
    val nodeWeights = compressedGraph.aggregateMessages(
      (e: EdgeContext[LouvainData, Long, Long]) => {
        e.sendToSrc(e.attr)
        e.sendToDst(e.attr)
      },
      (e1: Long, e2: Long) => e1 + e2
    )
    // fill in the weighted degree of each node
    val louvainGraph = compressedGraph.outerJoinVertices(nodeWeights)((vid, data, weightOption) => {
      val weight = weightOption.getOrElse(0L)
      data.communitySigmaTot = weight + data.internalWeight
      data.nodeWeight = weight
      data
    }).cache()
    louvainGraph.vertices.count()
    louvainGraph.triplets.count() // materialize the graph
    newVertices.unpersist(blocking = false)
    edges.unpersist(blocking = false)
    println("******************************************************")
    println(louvainGraph.vertices.count())
    louvainGraph
  }
  def saveLevel(
    sc: SparkContext,
    config: LouvainConfig,
    level: Int,
    qValues: Array[(Int, Double)],
    graph: Graph[LouvainData, Long]) = {
    val vertexSavePath = config.outputDir + "/level_" + level + "_vertices"
    val edgeSavePath = config.outputDir + "/level_" + level + "_edges"
    // save
    graph.vertices.saveAsTextFile(vertexSavePath)
    graph.edges.saveAsTextFile(edgeSavePath)
    // overwrite the q values at each level
    sc.parallelize(qValues, 1).saveAsTextFile(config.outputDir + "/qvalues_" + level)
  }
  def run[VD: ClassTag](sc: SparkContext, config: LouvainConfig): Unit = {
    val edgeRDD = getEdgeRDD(sc, config)
    val initialGraph = Graph.fromEdges(edgeRDD, None)
    var louvainGraph = createLouvainGraph(initialGraph)
    var compressionLevel = -1    // number of times the graph has been compressed
    var q_modularityValue = -1.0 // current modularity value
    var halt = false
    var qValues: Array[(Int, Double)] = Array()
    do {
      compressionLevel += 1
      println(s"\nStarting Louvain level $compressionLevel")
      // label each vertex with its best community choice at this level of compression
      val (currentQModularityValue, currentGraph, numberOfPasses) =
        louvain(sc, louvainGraph, config.minimumCompressionProgress, config.progressCounter)
      louvainGraph.unpersistVertices(blocking = false)
      louvainGraph = currentGraph
      println(s"qValue: $currentQModularityValue")
      qValues = qValues :+ ((compressionLevel, currentQModularityValue))
      saveLevel(sc, config, compressionLevel, qValues, louvainGraph)
      // if modularity increased by at least 0.001, compress the graph and repeat;
      // halt immediately if the community labeling took fewer than 3 passes
      if (numberOfPasses > 2 && currentQModularityValue > q_modularityValue + 0.001) {
        q_modularityValue = currentQModularityValue
        louvainGraph = compressGraph(louvainGraph)
      }
      else {
        halt = true
      }
    } while (!halt)
    //finalSave(sc, compressionLevel, q_modularityValue, louvainGraph)
  }
}
The code is taken from GitHub: https://github.com/athinggoingon/louvain-modularity.
Here is an example of the input file, just the first 10 lines. The graph is built from a CSV file with the schema: node1, node2, weight_of_the_edge.
104,158,34.23767571520276
146,242,12.49338107205348
36,37,0.6821403413414481
28,286,2.5053934980726456
9,92,0.34412941554076487
222,252,10.502288293870677
235,282,0.25717021769814874
264,79,18.555996343792327
24,244,1.7094102023399587
231,75,21.698401383558213
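One hedged observation, not from the original post: the Louvain method is known to be sensitive to the order in which vertices are visited and to how ties in the modularity gain are broken, and the parallel Spark variant moves many vertices per pass while the sequential reference algorithm moves one at a time, so different implementations can legitimately produce different partitions. A minimal sketch with the python-louvain package (the graph and seeds are illustrative, not the CDR data) shows how much ordering/randomness alone can change the community count:

import networkx as nx
import community as community_louvain  # pip install python-louvain

G = nx.erdos_renyi_graph(300, 0.05, seed=42)
for seed in (1, 2, 3):
    # different random states visit vertices in different orders
    partition = community_louvain.best_partition(G, random_state=seed)
    print(seed, '->', len(set(partition.values())), 'communities')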

Multiple identical keys from Python dict to JSON

I am trying to create a JSON object in Python, and it works just fine except that I can't get multiple keys with the same name, which is what I need.
Here's a function:
findings = AutoTree()
findings['report']['numberOfConditions'] = num_cond
if r == 'Mammography':
    f_temp = df['Relevant findings'].values.tolist()[0:8]
    f_list = [x for i, x in enumerate(f_temp) if i == f_temp.index(x)]
    f_num_total = len(f_list)
    f_rand = random.randrange(1, f_num_total + 1)
    iter_params_mass = ['shape', 'margin', 'density']
    for i in range(num_cond):
        br = get_birad(row, 2, 7)
        cond = camelCase(get_cond_name())
        findings[cond]['biRad'] = br
        for k in range(f_rand + 1):
            f = camelCase(random.choice(f_list))
            #f = 'mass'
            if f == 'mass':
                rep_temp = create_rep(iter_params_mass, row, f, r)
                findings[cond][f] = rep_temp
            """I also have a lot of elif conditions; they just grab parameters."""
report = json.dumps(findings)
print(report)
Output:
{
  "report": {
    "id": 85,
    "name": "Lydia",
    "age": 39,
    "relevantModality": "Mammography",
    "numberOfConditions": 2
  },
  "ductEctasia": {
    "biRad": "birad[1]",
    "calcifications": [
      {
        "typicallyBenign": "Vascular",
        "suspiciousMorphology": "Coarse heterogeneous",
        "distribution": "Diffuse"
      }
    ],
    "lymphNodes": [
      {
        "lymphNodes": "Lymph nodes \u2013 axillary"
      }
    ]
  }
}
And I want to have multiple "lymphNodes" and "calcifications" objects. Is that possible? Maybe you can suggest another way to create the JSON object, other than nested dictionaries? The problem is that I need to build the object according to a random parameter chosen from the database.
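A hedged sketch, not from the question: neither a Python dict nor (reliably) a JSON object can hold duplicate keys, so the usual workaround is to collect the repeated findings in a list under a single key. The output above already has this shape ("calcifications" and "lymphNodes" are lists), so appending to those lists instead of reassigning findings[cond][f] would keep every generated report:

import json

findings = {'ductEctasia': {'biRad': 'birad[1]', 'calcifications': []}}
# each repeated finding becomes one list element instead of a duplicate key
for rep in ({'distribution': 'Diffuse'}, {'distribution': 'Regional'}):
    findings['ductEctasia']['calcifications'].append(rep)
print(json.dumps(findings, indent=2))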

How to access latitude and longitude in a script with beautifulsoup?

I want to get the latitude and longitude from a webpage using BeautifulSoup, but they are inside a script:
//<![CDATA[
theForm.oldSubmit = theForm.submit;
theForm.submit = WebForm_SaveScrollPositionSubmit;
theForm.oldOnSubmit = theForm.onsubmit;
theForm.onsubmit = WebForm_SaveScrollPositionOnSubmit;
var GMapsProperties={};function getGMapElementById(mapId,GMapElementId){var _mapId=typeof(mapId)=='string'? mapId : mapId.getDiv().id;var overlayArray=GMapsProperties[_mapId]['overlayArray'];for(var i=0;i < overlayArray.length;i++){if(overlayArray[i][0]==GMapElementId){return overlayArray[i][1];}}return null;}function removeGMapElementById(mapId,GMapElementId){var _mapId=typeof(mapId)=='string'? mapId : mapId.getDiv().id;var overlayArray=GMapsProperties[_mapId]['overlayArray'];for(var i=0;i < overlayArray.length;i++){if(overlayArray[i][0]==GMapElementId){overlayArray.splice(i,1);return;}}}function closeWindows(mapId){for(var i=0;i<GMapsProperties[mapId]['windowArray'].length;i++){GMapsProperties[mapId]['windowArray'][i][1].close();}}var _sg=_sg ||{};_sg.cs=(function(){var p={};p.createMarker=function(opt,id){var m=new google.maps.Marker(opt);if(id && m.getMap())GMapsProperties[m.getMap().getDiv().id]['overlayArray'].push([id,m]);return m;};p.createPolyline=function(opt,id){var m=new google.maps.Polyline(opt);if(id && m.getMap())GMapsProperties[m.getMap().getDiv().id]['overlayArray'].push([id,m]);return m;};p.createPolygon=function(opt,id){var m=new google.maps.Polygon(opt);if(id && m.getMap())GMapsProperties[m.getMap().getDiv().id]['overlayArray'].push([id,m]);return m;};return p;})();function addEvent(el,ev,fn){if(el.addEventListener)el.addEventListener(ev,fn,false);else if(el.attachEvent)el.attachEvent('on'+ev,fn);else el['on'+ev]=fn;}GMapsProperties['subgurim_GoogleMapControl'] = {}; var GMapsProperties_subgurim_GoogleMapControl = GMapsProperties['subgurim_GoogleMapControl']; GMapsProperties_subgurim_GoogleMapControl['enableStore'] = false; GMapsProperties_subgurim_GoogleMapControl['overlayArray'] = new Array(); GMapsProperties_subgurim_GoogleMapControl['windowArray'] = new Array();var subgurim_GoogleMapControl;function load_subgurim_GoogleMapControl(){var mapDOM = document.getElementById('subgurim_GoogleMapControl'); if (!mapDOM) return;subgurim_GoogleMapControl = new google.maps.Map(mapDOM);function subgurim_GoogleMapControlupdateValues(eventId,value){var item=document.getElementById('subgurim_GoogleMapControl_Event'+eventId);item.value=value;}google.maps.event.addListener(subgurim_GoogleMapControl, 'addoverlay', function(overlay) { if(overlay) { GMapsProperties['subgurim_GoogleMapControl']['overlayArray'].push(overlay); } });google.maps.event.addListener(subgurim_GoogleMapControl, 'clearoverlays', function() { GMapsProperties['subgurim_GoogleMapControl']['overlayArray'] = new Array(); });google.maps.event.addListener(subgurim_GoogleMapControl, 'removeoverlay', function(overlay) { removeGMapElementById('subgurim_GoogleMapControl',overlay.id) });google.maps.event.addListener(subgurim_GoogleMapControl, 'maptypeid_changed', function() { var tipo = subgurim_GoogleMapControl.getMapTypeId(); subgurim_GoogleMapControlupdateValues('0', tipo);});google.maps.event.addListener(subgurim_GoogleMapControl, 'dragend', function() { var lat = subgurim_GoogleMapControl.getCenter().lat(); var lng = subgurim_GoogleMapControl.getCenter().lng(); subgurim_GoogleMapControlupdateValues('2', lat+','+lng); });google.maps.event.addListener(subgurim_GoogleMapControl, 'zoom_changed', function() { subgurim_GoogleMapControlupdateValues('1', subgurim_GoogleMapControl.getZoom()); });subgurim_GoogleMapControl.setOptions({center:new 
google.maps.LatLng(35.6783546483511,51.4196634292603),disableDefaultUI:true,keyboardShortcuts:false,mapTypeControl:false,mapTypeId:google.maps.MapTypeId.ROADMAP,scrollwheel:false,zoom:14});var marker_subgurim_920435_=_sg.cs.createMarker({position:new google.maps.LatLng(35.6783546483511,51.4196634292603),clickable:true,draggable:false,map:subgurim_GoogleMapControl,raiseOnDrag:true,visible:true,icon:'/images/markers/Site/Tourism/vase.png'}, 'marker_subgurim_920435_');}addEvent(window,'load',load_subgurim_GoogleMapControl);//]]>
and I want the information in this part:
{position:new google.maps.LatLng(35.6783546483511,51.4196634292603)
Is it possible to access that information using BeautifulSoup or any other web scraper?
Use a regular expression for this purpose.
import re

# Suppose the script is stored in the variable script_file
m = re.search(r'LatLng(\(.+?\))', script_file)
latlng = m.group(1)    # the text "(35.6783546483511,51.4196634292603)"
latlng = eval(latlng)  # evaluated to a tuple of floats
print(latlng)          # (35.6783546483511, 51.4196634292603)
import re
s = 'position:new google.maps.LatLng(35.6783546483511,51.4196634292603)'
lat, lng = map(float, re.search(r'\(([^,]+),([^)]+)', s).groups())
If you want to get the latitude and longitude separately, use a regex of this form:
import re
s = 'position:new google.maps.LatLng(35.6783546483511,51.4196634292603)'
Lat, Lng = map(float, re.search(r'LatLng\(([\d.]+),([\d.]+)\)',s).groups())
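To answer the BeautifulSoup part of the question: BeautifulSoup does not execute or parse JavaScript, but it can extract the script text from the page, after which the regexes above apply. A minimal sketch, assuming a hypothetical URL and that the coordinates appear inside a script tag:

import re
import requests
from bs4 import BeautifulSoup

html = requests.get('https://example.com/map-page').text  # hypothetical URL
soup = BeautifulSoup(html, 'html.parser')
# concatenate the text of all <script> tags and search it
script_file = ''.join(s.get_text() for s in soup.find_all('script'))
m = re.search(r'LatLng\((-?[\d.]+),(-?[\d.]+)\)', script_file)
if m:
    lat, lng = map(float, m.groups())
    print(lat, lng)  # 35.6783546483511 51.4196634292603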

Python weave to speed up our code

We would really appreciate any kind of help, because this program is driving us crazy as we try to make it faster using C.
The values obtained don't change; they are always 0, 0, 0, 0.
Here is the code, running on Linux:
from scipy import weave

pasa = 0
coorX = -11.8
coorY = -7.9
INC = 0.01296
##def weave_update():
code = """
int i, j, pasa;
double coorX, coorY, INC;
for (i = 0; i < 1296; i++){
    yminf = coorY + INC*(i);
    ymaxf = yminf + INC;
    for (j = 0; j < 1936; j++){
        xminc = coorX + INC*(j);
        xmaxc = xminc + INC;
        pasa = 1;
        break;
    }
    if (pasa == 1){
        break;
    }
}
"""
weave.inline(code, ['yminf','xminc','xmaxc','ymaxf'], type_converters=weave.converters.blitz, compiler='gcc')
print yminf, xminc, xmaxc, ymaxf
It looks like there are two issues. First, you need to pass in all of the variables that the C code needs to access from Python, so your inline call needs to be:
weave.inline(code, ['coorX','coorY','INC'])
Secondly, you need to return the values you want from the weave code, because modifying them in C doesn't affect their value in Python. Here's one way to do it:
py::tuple ret(4);
ret[0] = yminf;
ret[1] = xminc;
ret[2] = xmaxc;
ret[3] = ymaxf;
return_val = ret;
With these modifications, the following file seems to work correctly:
from scipy import weave

coorX = -11.8
coorY = -7.9
INC = 0.01296
code = """
int i, j, pasa = 0;
double yminf, xminc, xmaxc, ymaxf;
for (i = 0; i < 1296; i++){
    yminf = coorY + INC*(i);
    ymaxf = yminf + INC;
    for (j = 0; j < 1936; j++){
        xminc = coorX + INC*(j);
        xmaxc = xminc + INC;
        pasa = 1;
        break;
    }
    if (pasa == 1){
        break;
    }
}
py::tuple ret(4);
ret[0] = yminf;
ret[1] = xminc;
ret[2] = xmaxc;
ret[3] = ymaxf;
return_val = ret;
"""
yminf, xminc, xmaxc, ymaxf = weave.inline(code, ['coorX','coorY','INC'])
print yminf, xminc, xmaxc, ymaxf
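A hedged aside: scipy.weave only ever supported Python 2 and was later removed from SciPy (it survives as the separate weave package). Also note that because both loops break on their first iteration, the C snippet reduces to four assignments, so a plain-Python equivalent of the fix above is simply:

coorX = -11.8
coorY = -7.9
INC = 0.01296

# i = 0, j = 0 is the only iteration that runs before the breaks
yminf = coorY
ymaxf = yminf + INC
xminc = coorX
xmaxc = xminc + INC
print(yminf, xminc, xmaxc, ymaxf)  # -7.9 -11.8 -11.78704 -7.88704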
