scala - How to read probability Vector in Spark DataFrame LogisticRegression output


I am trying to read the first probability from a logistic regression output so that I can perform decile binning on it.

The test code below emulates the output's probability vector.

    import org.apache.spark.mllib.linalg.{Vector, Vectors}
    import org.apache.spark.sql.{Row, RowFactory}
    import org.apache.spark.sql.types.{DoubleType, StringType, StructField, StructType}

    val r = sqlContext.createDataFrame(Seq(
        ("jane", Vectors.dense(.98)),
        ("tom", Vectors.dense(.34)),
        ("nancy", Vectors.dense(.93)),
        ("tim", Vectors.dense(.02)),
        ("larry", Vectors.dense(.033)),
        ("lana", Vectors.dense(.85)),
        ("jack", Vectors.dense(.84)),
        ("john", Vectors.dense(.09)),
        ("jill", Vectors.dense(.12)),
        ("mike", Vectors.dense(.21)),
        ("jason", Vectors.dense(.31)),
        ("roger", Vectors.dense(.76)),
        ("ed", Vectors.dense(.77)),
        ("alan", Vectors.dense(.64)),
        ("ryan", Vectors.dense(.52)),
        ("ted", Vectors.dense(.66)),
        ("paul", Vectors.dense(.67)),
        ("brian", Vectors.dense(.68)),
        ("jeff", Vectors.dense(.05)))).toDF("csMasterCustomerId", "mlProbability")
    var result = r.select("csMasterCustomerId", "mlProbability")
    val schema = StructType(Seq(
        StructField("csMasterCustomerId", StringType, false),
        StructField("mlProbability", DoubleType, true)))
    result = sqlContext.createDataFrame(result.map((r: Row) => {
        r match {
            case Row(mcid: String, probability: Vector) =>
                RowFactory.create(mcid, probability(0))
        }
    }), schema)

This fails to compile with:

    <console>:56: error: type mismatch;
     found   : Double
     required: Object
    Note: an implicit exists from scala.Double => java.lang.Double, but
    methods inherited from Object are rendered ambiguous.  This is to avoid
    a blanket implicit which would convert any scala.Double to any AnyRef.
    You may wish to use a type ascription: `x: java.lang.Double`.
                           RowFactory.create(mcid, probability(0))

Any suggestions for a fix, or a different approach?
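For reference, the compiler message itself hints at one possible fix: `RowFactory.create` is a Java varargs method taking `Object*`, so the Scala `Double` needs an explicit ascription to `java.lang.Double` to box it unambiguously. Alternatively, Scala's own `Row(...)` factory accepts `Any*` and avoids the issue entirely. A sketch of both variants (untested, assuming the same `mcid`/`probability` bindings as above):

```scala
// Inside the pattern match from the snippet above:
case Row(mcid: String, probability: Vector) =>
    // Option 1: type ascription, as the compiler suggests,
    // so the implicit scala.Double => java.lang.Double applies cleanly.
    RowFactory.create(mcid, probability(0): java.lang.Double)

// Or equivalently:
case Row(mcid: String, probability: Vector) =>
    // Option 2: Scala's Row.apply takes Any*, so primitives box automatically.
    Row(mcid, probability(0))
```

Either variant should produce a `Row` whose second field is a plain `Double`, matching the `DoubleType` declared in the schema.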