DepthKV: Layer-Dependent KV Cache Pruning for Long-Context LLM Inference — Zahra Dehghanighobadi, Asja Fischer