Spark 3.5 is released, but...
The answer may be as old as Spark 1.5.0: datediff.
datediff(col_name, '1000') returns the difference in days, as an integer, from 1000-01-01 to col_name.
As the first argument, it accepts dates, timestamps and even strings.
As the second, it even accepts the bare '1000', which is interpreted as the date 1000-01-01.
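For illustration (my example, not from the original), these all produce the same day count for the same calendar date:

spark.range(1).select(
    F.expr("datediff(date '2000-01-05', '1000')"),                # date
    F.expr("datediff(timestamp '2000-01-05 12:34:56', '1000')"),  # timestamp; time of day is ignored
    F.expr("datediff('2000-01-05', '1000')"),                     # string
).show()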
The answer
Date difference in days - depending on the data type of the order column:
date
Spark 3.5+
.orderBy(F.unix_date("col_name")).rangeBetween(-7, 0)
Spark 3.1+
.orderBy(F.expr("unix_date(col_name)")).rangeBetween(-7, 0)
Spark 2.1+
.orderBy(F.expr("datediff(col_name, '1000')")).rangeBetween(-7, 0)
timestamp
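No snippet was given for this case; a sketch (my addition) that truncates the timestamp to its date first - datediff also accepts timestamps directly, as noted above:
Spark 3.5+
.orderBy(F.unix_date(F.to_date("col_name"))).rangeBetween(-7, 0)
Spark 2.1+
.orderBy(F.expr("datediff(col_name, '1000')")).rangeBetween(-7, 0)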
long - UNIX time in microseconds (e.g. 1672534861000000)
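Also left without a snippet; one hedged option (my addition) is to floor-divide down to a whole day number, assuming UTC day boundaries are acceptable (86400000000 microseconds per day):
.orderBy(F.floor(F.col("col_name") / 86400000000)).rangeBetween(-7, 0)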
long - UNIX time in milliseconds (e.g. 1672534861000)
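Same hedged approach, with 86400000 milliseconds per day:
.orderBy(F.floor(F.col("col_name") / 86400000)).rangeBetween(-7, 0)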
long - UNIX time in seconds (e.g. 1672534861)
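And with 86400 seconds per day:
.orderBy(F.floor(F.col("col_name") / 86400)).rangeBetween(-7, 0)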
long in format yyyyMMdd
Spark 3.5+
.orderBy(F.unix_date(F.to_date("col_name", 'yyyyMMdd'))).rangeBetween(-7, 0)
Spark 3.3+
.orderBy(F.expr("unix_date(to_date(col_name, 'yyyyMMdd'))")).rangeBetween(-7, 0)
Spark 3.1+
.orderBy(F.expr("unix_date(to_date(cast(col_name as string), 'yyyyMMdd'))")).rangeBetween(-7, 0)
Spark 2.2+
.orderBy(F.expr("datediff(to_date(cast(col_name as string), 'yyyyMMdd'), '1000')")).rangeBetween(-7, 0)
Spark 2.1+
.orderBy(F.unix_timestamp(F.col("col_name").cast('string'), 'yyyyMMdd') / 86400).rangeBetween(-7, 0)
string in date format of 'yyyy-MM-dd'
Spark 3.5+
.orderBy(F.unix_date(F.to_date("col_name"))).rangeBetween(-7, 0)
Spark 3.1+
.orderBy(F.expr("unix_date(to_date(col_name))")).rangeBetween(-7, 0)
Spark 2.1+
.orderBy(F.expr("datediff(col_name, '1000')")).rangeBetween(-7, 0)
string in other date format (e.g. 'MM-dd-yyyy')
Spark 3.5+
.orderBy(F.unix_date(F.to_date("col_name", 'MM-dd-yyyy'))).rangeBetween(-7, 0)
Spark 3.1+
.orderBy(F.expr("unix_date(to_date(col_name, 'MM-dd-yyyy'))")).rangeBetween(-7, 0)
Spark 2.2+
.orderBy(F.expr("datediff(to_date(col_name, 'MM-dd-yyyy'), '1000')")).rangeBetween(-7, 0)
Spark 2.1+
.orderBy(F.unix_timestamp("col_name", 'MM-dd-yyyy') / 86400).rangeBetween(-7, 0)
string in timestamp format of 'yyyy-MM-dd HH:mm:ss'
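This case is blank in the original; since a 'yyyy-MM-dd HH:mm:ss' string casts to a date directly, the 'yyyy-MM-dd' snippets should carry over unchanged (a sketch, my addition):
Spark 3.5+
.orderBy(F.unix_date(F.to_date("col_name"))).rangeBetween(-7, 0)
Spark 2.1+
.orderBy(F.expr("datediff(col_name, '1000')")).rangeBetween(-7, 0)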
string in other timestamp format (e.g. 'MM-dd-yyyy HH:mm:ss')
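Also blank in the original; a sketch (my addition) that parses with an explicit pattern and drops the time part, mirroring the 'MM-dd-yyyy' case:
Spark 3.5+
.orderBy(F.unix_date(F.to_date("col_name", 'MM-dd-yyyy HH:mm:ss'))).rangeBetween(-7, 0)
Spark 2.2+
.orderBy(F.expr("datediff(to_date(col_name, 'MM-dd-yyyy HH:mm:ss'), '1000')")).rangeBetween(-7, 0)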
Test data for the different cases can be created in Spark 3.4+ (the snippet uses F.inline, added in 3.4) with this:
from pyspark.sql import functions as F  # assuming an active SparkSession named `spark`

ints = F.expr("sequence(1, 10)").alias('ints')
dates = (
    # keep exactly one of the following assignment expressions uncommented
    date := F.expr("sequence(to_date('2000-01-01'), to_date('2000-01-10'))")
    # timestamp := F.expr("sequence(to_timestamp('2000-01-01'), to_timestamp('2000-01-10'))")
    # long_micro := F.expr("sequence(946684800000000, 947462400000000, 86400000000)")
    # long_milli := F.expr("sequence(946684800000, 947462400000, 86400000)")
    # long_secs := F.expr("sequence(946684800, 947462400, 86400)")
    # long_yyyyMMdd := F.expr("sequence(20000101, 20000110)")
    # str_unformatted_date := F.expr("transform(sequence(to_date('2000-01-01'), to_date('2000-01-10')), x -> string(x))")
    # str_formatted_date := F.expr("transform(sequence(to_date('2000-01-01'), to_date('2000-01-10')), x -> date_format(x, 'MM-dd-yyyy'))")
    # str_unformatted_ts := F.expr("transform(sequence(to_timestamp('2000-01-01'), to_timestamp('2000-01-10')), x -> string(x))")
    # str_formatted_ts := F.expr("transform(sequence(to_date('2000-01-01'), to_date('2000-01-10')), x -> date_format(x, 'MM-dd-yyyy HH:mm:ss'))")
).alias('col_name')
df = spark.range(1).select(F.inline(F.arrays_zip(ints, dates)))
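A hypothetical end-to-end usage (not part of the original answer), applying the plain date variant (Spark 3.5+) to the test DataFrame, just to show where the .orderBy(...).rangeBetween(-7, 0) fragments plug in:

from pyspark.sql import Window as W

w = W.orderBy(F.unix_date("col_name")).rangeBetween(-7, 0)  # no partitionBy: fine for a tiny test
df.withColumn("rolling_sum", F.sum("ints").over(w)).show()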