I have applied a groupby and am calculating the standard deviation for two features in a PySpark dataframe:
from pyspark.sql import functions as f

val1 = [('a',20,100),('a',100,100),('a',50,100),
        ('b',0,100),('b',0,100),
        ('c',0,0),('c',0,50),('c',0,100),('c',0,20)]
cols = ['group','val1','val2']
tf = spark.createDataFrame(val1, cols)
tf.show()

tf.groupby('group').agg(f.stddev(['val1','val2']).alias('val1_std','val2_std'))
but it is giving me the following error:
TypeError: _() takes 1 positional argument but 2 were given
How can I perform this aggregation in PySpark?
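What I am after is something like the sketch below, assuming f.stddev accepts only a single column, so each column needs its own call and its own alias:

from pyspark.sql import functions as f

# Sketch: one stddev expression per column, each with its own alias,
# passed as separate arguments to agg(). Reuses the tf dataframe above.
tf.groupby('group').agg(
    f.stddev('val1').alias('val1_std'),
    f.stddev('val2').alias('val2_std')
).show()

Is a per-column call like this the idiomatic way, or is there a way to apply stddev to several columns at once?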