I ran the regexp_replace function on a PySpark dataframe, and afterwards the datatype of every column changed to string. Why is that?

Below is my schema before using regexp_replace:

root
 |-- account_id: long (nullable = true)
 |-- credit_card_limit: long (nullable = true)
 |-- credit_card_number: long (nullable = true)
 |-- first_name: string (nullable = true)
 |-- last_name: string (nullable = true)
 |-- phone_number: long (nullable = true)
 |-- amount: long (nullable = true)
 |-- date: string (nullable = true)
 |-- shop: string (nullable = true)
 |-- transaction_code: string (nullable = true)

Schema after applying regexp_replace:

root
 |-- date_type: date (nullable = true)
 |-- c_phone_number: string (nullable = true)
 |-- c_account_id: string (nullable = true)
 |-- c_credit_card_limit: string (nullable = true)
 |-- c_credit_card_number: string (nullable = true)
 |-- c_amount: string (nullable = true)
 |-- c_full_name: string (nullable = true)
 |-- c_transaction_code: string (nullable = true)
 |-- c_shop: string (nullable = true)

The code I used:

from pyspark.sql.functions import regexp_replace

df=df.withColumn('c_phone_number',regexp_replace("phone_number","[^0-9]","")).drop('phone_number')
df=df.withColumn('c_account_id',regexp_replace("account_id","[^0-9]","")).drop('account_id')
df=df.withColumn('c_credit_card_limit',regexp_replace("credit_card_limit","[^0-9]","")).drop('credit_card_limit')
df=df.withColumn('c_credit_card_number',regexp_replace("credit_card_number","[^0-9]","")).drop('credit_card_number')
df=df.withColumn('c_amount',regexp_replace("amount","[^0-9 ]","")).drop('amount')
df=df.withColumn('c_full_name',regexp_replace("full_name","[^a-zA-Z ]","")).drop('full_name')
df=df.withColumn('c_transaction_code',regexp_replace("transaction_code","[^a-zA-Z]","")).drop('transaction_code')
df=df.withColumn('c_shop',regexp_replace("shop","[^a-zA-Z ]","")).drop('shop')

Why is this happening? Is there a way to preserve the original datatypes, or do I have to cast each column again?

1 Answer
You may want to look at the code from the Spark git repository for regexp_replace:

override def nullSafeEval(s: Any, p: Any, r: Any): Any = {
    if (!p.equals(lastRegex)) {
      // regex value changed
      lastRegex = p.asInstanceOf[UTF8String].clone()
      pattern = Pattern.compile(lastRegex.toString)
    }
    if (!r.equals(lastReplacementInUTF8)) {
      // replacement string changed
      lastReplacementInUTF8 = r.asInstanceOf[UTF8String].clone()
      lastReplacement = lastReplacementInUTF8.toString
    }
    val m = pattern.matcher(s.toString())
    result.delete(0, result.length())

    while (m.find) {
      m.appendReplacement(result, lastReplacement)
    }
    m.appendTail(result)

    UTF8String.fromString(result.toString)
  }
1. The code above accepts the expression as Any and calls toString() on it, so the input type is discarded.
2. At the end, it wraps the result back into a string: UTF8String.fromString(result.toString). This is why every column you pass through regexp_replace comes out as a string, regardless of its original type.

ref - spark-git
