
Android three-level image cache strategy (memory, file, network), part 2: the memory cache strategy

 

Preface

A while back I wrote a post about a banner component, and many readers asked me for the code, in particular for the image management factory used inside the banner, ImageManager. To follow the story below it helps to have read two posts I wrote half a year ago: part 1 of this series on the three-level image cache in Android (memory, file, network), and the post on implementing left/right screen sliding in Android (the banner ad component). I did not publish the code at the time for a few reasons: there is quite a lot of it; when I wrote it I drew on the three-level cache strategies already circulating on the web (they are all much alike) and on the open-source LruCache class from the Android project, which implements the least-recently-used replacement policy; and it was code from a real project, so it was not convenient to release as-is. I have now decided to publish it with some small modifications. A word about the banner itself: frankly, it takes a lot of code, and using something like ViewPager would shrink it considerably, but what I value most is the design behind it, its encapsulation, and the way events are passed around. As an exercise in packaging and architecting a custom view I still consider the banner a real success, especially once it is combined with ImageManager: the whole thing hangs together naturally and is highly cohesive, and it is extremely easy to use. In the minimal case you need only two lines of code; you do not have to write any XML or deal with fetching and parsing JSON, because the business logic is encapsulated inside the banner, which exposes only a handful of interfaces for you to implement in order to interact with it. Below I introduce part 2 of the three-level cache strategy: the memory cache.

Memory cache strategy

When an image needs to be shown, we do not download it from the network right away: users' mobile data is precious, and an app that burns through it will not win their affection. Instead, we first look for the image in the memory cache; if it is not there, we look in the file cache; and only if that also fails do we download it from the network. This post focuses on the memory cache. Its lookup strategy is: search the strong-reference cache first, and if the image is not there, search the soft-reference cache; if it is found in the soft-reference cache, move it back into the strong-reference cache. When the strong-reference cache is full, the LRU algorithm pushes some images out into the soft-reference cache, and when the soft-reference cache in turn is full, its eldest entries are dropped. Before going further, a few concepts need to be spelled out: strong references, soft references, weak references, and LRU.
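
To make that flow concrete, here is a minimal sketch of the three-level lookup; ImageFileCache with its getImage()/saveBitmap() methods and the downloadBitmap() helper are hypothetical stand-ins for the file and network layers, and only the memory layer, ImageMemoryCache, is given in full below.

// Minimal sketch of the three-level lookup. memoryCache and fileCache are
// assumed to be fields; fileCache and downloadBitmap() are hypothetical.
public Bitmap getBitmap(String url) {
    // 1. Memory cache: strong references first, then soft references.
    Bitmap bitmap = memoryCache.getBitmapFromMemory(url);
    if (bitmap == null) {
        // 2. File cache: decode from local storage if the image was saved before.
        bitmap = fileCache.getImage(url);
        if (bitmap != null) {
            memoryCache.addBitmapToMemory(url, bitmap); // promote into memory
        } else {
            // 3. Network: download as a last resort, then fill both caches.
            bitmap = downloadBitmap(url);
            if (bitmap != null) {
                fileCache.saveBitmap(bitmap, url);
                memoryCache.addBitmapToMemory(url, bitmap);
            }
        }
    }
    return bitmap;
}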

Strong reference: a direct reference to an object; ordinary object references are strong references.

Soft reference: a reference such that, when memory runs low and nothing else references the object, the object can be reclaimed by the GC.

Weak reference: a reference such that, when nothing else references the object, it is reclaimed as soon as the GC runs (note the difference from a soft reference).
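
A minimal illustration of how a soft reference is handled in code (imageView is just a placeholder here): the referent must be re-checked every time, because the GC may already have reclaimed it.

// java.lang.ref.SoftReference in practice: get() may return null at any time
// once the GC has reclaimed the wrapped bitmap, so it must be null-checked.
SoftReference<Bitmap> ref = new SoftReference<Bitmap>(bitmap);
// ... some time later ...
Bitmap cached = ref.get();
if (cached != null) {
    imageView.setImageBitmap(cached); // still alive, use it directly
} else {
    // already reclaimed: fall back to the file cache or the network
}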

LRU (Least Recently Used): a page replacement algorithm. With a fixed number of cached pages, the pages that have been used least recently are the ones pushed out. For our memory cache, the strong-reference cache is fixed at 4MB; once the cached images exceed 4MB, some images are removed from the strong-reference cache, namely the ones used least recently.
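
The soft-reference cache below gets exactly this behaviour from a java.util.LinkedHashMap constructed in access order, and the LruCache shown later builds on the same mechanism; a tiny self-contained example:

// A 3-entry LRU built on LinkedHashMap with accessOrder = true: every get()
// moves the entry to the tail, and removeEldestEntry() drops the head (the
// least recently used entry) once the limit is exceeded.
LinkedHashMap<String, String> lru = new LinkedHashMap<String, String>(3, 0.75f, true) {
    @Override
    protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
        return size() > 3;
    }
};
lru.put("a", "1");
lru.put("b", "2");
lru.put("c", "3");
lru.get("a");                      // "a" becomes the most recently used entry
lru.put("d", "4");                 // evicts "b", the least recently used entry
System.out.println(lru.keySet());  // prints [c, a, d]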

Code

import java.lang.ref.SoftReference;
import java.util.LinkedHashMap;

import android.graphics.Bitmap;

public class ImageMemoryCache {
    /**
     * Reading from memory is the fastest option, so to make the most of it the
     * memory cache has two levels: a strong-reference cache that is not easily
     * reclaimed and holds frequently used bitmaps, and a soft-reference cache
     * that receives the less frequently used ones.
     */
    private static final String TAG = "ImageMemoryCache";

    private static LruCache<String, Bitmap> mLruCache; // strong-reference cache

    private static LinkedHashMap<String, SoftReference<Bitmap>> mSoftCache; // soft-reference cache

    private static final int LRU_CACHE_SIZE = 4 * 1024 * 1024; // strong-reference cache capacity: 4MB

    private static final int SOFT_CACHE_NUM = 20; // maximum number of soft-reference entries

    // Initialize the strong-reference cache and the soft-reference cache here
    public ImageMemoryCache() {
        mLruCache = new LruCache<String, Bitmap>(LRU_CACHE_SIZE) {
            @Override
            // sizeOf returns the size of a single cache entry (the bitmap), in bytes
            protected int sizeOf(String key, Bitmap value) {
                if (value != null)
                    return value.getRowBytes() * value.getHeight();
                else
                    return 0;
            }

            @Override
            protected void entryRemoved(boolean evicted, String key,
                    Bitmap oldValue, Bitmap newValue) {
                // When the strong-reference cache is full, the image that the LRU
                // algorithm evicts is moved into this soft-reference cache.
                if (evicted && oldValue != null) {
                    Logger.d(TAG, "LruCache is full, move to SoftReferenceCache");
                    mSoftCache.put(key, new SoftReference<Bitmap>(oldValue));
                }
            }
        };

        mSoftCache = new LinkedHashMap<String, SoftReference<Bitmap>>(
                SOFT_CACHE_NUM, 0.75f, true) {
            private static final long serialVersionUID = 1L;

            /**
             * When the number of soft references exceeds SOFT_CACHE_NUM (20),
             * the eldest soft reference is removed from the linked hash map.
             */
            @Override
            protected boolean removeEldestEntry(
                    Entry<String, SoftReference<Bitmap>> eldest) {
                if (size() > SOFT_CACHE_NUM) {
                    Logger.d(TAG, "should remove the eldest from SoftReference");
                    return true;
                }
                return false;
            }
        };
    }

    /**
     * Fetch a bitmap from the memory cache.
     */
    public Bitmap getBitmapFromMemory(String url) {
        Bitmap bitmap;

        // Look in the strong-reference cache first
        synchronized (mLruCache) {
            bitmap = mLruCache.get(url);
            if (bitmap != null) {
                // On a hit, re-insert the entry so it becomes the most recently
                // used one and is the last to be evicted by the LRU algorithm
                mLruCache.remove(url);
                mLruCache.put(url, bitmap);
                Logger.d(TAG, "get bmp from LruCache,url=" + url);
                return bitmap;
            }
        }

        // Not found in the strong-reference cache: look in the soft-reference cache,
        // and on a hit promote the bitmap back into the strong-reference cache
        synchronized (mSoftCache) {
            SoftReference<Bitmap> bitmapReference = mSoftCache.get(url);
            if (bitmapReference != null) {
                bitmap = bitmapReference.get();
                if (bitmap != null) {
                    // Move the bitmap back into the LruCache
                    mLruCache.put(url, bitmap);
                    mSoftCache.remove(url);
                    Logger.d(TAG, "get bmp from SoftReferenceCache, url=" + url);
                    return bitmap;
                } else {
                    mSoftCache.remove(url);
                }
            }
        }
        return null;
    }

    /**
     * Add a bitmap to the memory cache.
     */
    public void addBitmapToMemory(String url, Bitmap bitmap) {
        if (bitmap != null) {
            synchronized (mLruCache) {
                mLruCache.put(url, bitmap);
            }
        }
    }

    public void clearCache() {
        // Evicting the LruCache moves its entries into the soft cache first,
        // so clear the soft cache afterwards to release everything.
        mLruCache.evictAll();
        mSoftCache.clear();
    }
}
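
A natural place to call clearCache() is when the system reports memory pressure; a minimal sketch, assuming the cache is held by an Application subclass (the MyApplication wiring is only an example, not part of the class above):

public class MyApplication extends android.app.Application {
    // A single shared memory cache for the whole process (assumed setup).
    public static final ImageMemoryCache sMemoryCache = new ImageMemoryCache();

    @Override
    public void onLowMemory() {
        super.onLowMemory();
        // Drop cached bitmaps when memory runs low; they can always be
        // re-loaded from the file cache or the network.
        sMemoryCache.clearCache();
    }
}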

In addition, here is the LruCache implementation for reference:

/*
 * Copyright (C) 2011 The Android Open Source Project
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * A cache that holds strong references to a limited number of values. Each time
 * a value is accessed, it is moved to the head of a queue. When a value is added
 * to a full cache, the value at the end of that queue is evicted and may become
 * eligible for garbage collection.
 *
 * <p>If your cached values hold resources that need to be explicitly released,
 * override {@link #entryRemoved}.
 *
 * <p>If a cache miss should be computed on demand for the corresponding keys,
 * override {@link #create}. This simplifies the calling code, allowing it to
 * assume a value will always be returned, even when there's a cache miss.
 *
 * <p>By default, the cache size is measured in the number of entries. Override
 * {@link #sizeOf} to size the cache in different units, for example bytes.
 *
 * <pre>   {@code
 *   int cacheSize = 4 * 1024 * 1024; // 4MiB
 *   LruCache<String, Bitmap> bitmapCache = new LruCache<String, Bitmap>(cacheSize) {
 *       protected int sizeOf(String key, Bitmap value) {
 *           return value.getByteCount();
 *       }
 *   }}</pre>
 *
 * <p>This class is thread-safe. Perform multiple cache operations atomically by
 * synchronizing on the cache: <pre>   {@code
 *   synchronized (cache) {
 *     if (cache.get(key) == null) {
 *         cache.put(key, value);
 *     }
 *   }}</pre>
 *
 * <p>This class does not allow null to be used as a key or value. A return value
 * of null from {@link #get}, {@link #put} or {@link #remove} is unambiguous:
 * the key was not in the cache.
 */
public class LruCache<K, V> {
    private final LinkedHashMap<K, V> map;

    /** Size of this cache in units. Not necessarily the number of elements. */
    private int size;           // current size of the cache, in user-defined units
    private int maxSize;        // maximum allowed size of the cache

    private int putCount;       // number of put() calls
    private int createCount;    // number of values created by create()
    private int evictionCount;  // number of values evicted
    private int hitCount;       // number of cache hits
    private int missCount;      // number of cache misses

    /**
     * @param maxSize for caches that do not override {@link #sizeOf}, this is
     *     the maximum number of entries in the cache. For all other caches,
     *     this is the maximum sum of the sizes of the entries in this cache.
     */
    public LruCache(int maxSize) {
        if (maxSize <= 0) {
            throw new IllegalArgumentException("maxSize <= 0");
        }
        this.maxSize = maxSize;
        this.map = new LinkedHashMap<K, V>(0, 0.75f, true);
    }

    /**
     * Returns the value for {@code key} if it exists in the cache or can be
     * created by {@code create()}. If a value was returned, it is moved to the
     * head of the queue. Returns null if a value is not cached and cannot be
     * created.
     */
    public final V get(K key) {
        if (key == null) {
            throw new NullPointerException("key == null");
        }

        V mapValue;
        synchronized (this) {
            mapValue = map.get(key);
            if (mapValue != null) {
                hitCount++;
                return mapValue;
            }
            missCount++;
        }

        /*
         * Attempt to create a value. This may take a long time, and the map
         * may be different when create() returns. If a conflicting value was
         * added to the map while create() was working, we leave that value in
         * the map and release the created value.
         */

        V createdValue = create(key);
        if (createdValue == null) {
            return null;
        }

        synchronized (this) {
            createCount++;
            mapValue = map.put(key, createdValue);

            if (mapValue != null) {
                // There was a conflict so undo that last put
                map.put(key, mapValue);
            } else {
                size += safeSizeOf(key, createdValue);
            }
        }

        if (mapValue != null) {
            entryRemoved(false, key, createdValue, mapValue);
            return mapValue;
        } else {
            trimToSize(maxSize);
            return createdValue;
        }
    }

    /**
     * Caches {@code value} for {@code key}. The value is moved to the head of
     * the queue.
     *
     * @return the previous value mapped by {@code key}.
     */
    public final V put(K key, V value) {
        if (key == null || value == null) {
            throw new NullPointerException("key == null || value == null");
        }

        V previous;
        synchronized (this) {
            putCount++;
            size += safeSizeOf(key, value);
            previous = map.put(key, value);
            if (previous != null) {
                size -= safeSizeOf(key, previous);
            }
        }

        if (previous != null) {
            entryRemoved(false, key, previous, value);
        }

        trimToSize(maxSize);
        return previous;
    }

    /**
     * @param maxSize the maximum size of the cache before returning. May be -1
     *     to evict even 0-sized elements.
     */
    private void trimToSize(int maxSize) {
        while (true) {
            K key;
            V value;
            synchronized (this) {
                if (size < 0 || (map.isEmpty() && size != 0)) {
                    throw new IllegalStateException(getClass().getName()
                            + ".sizeOf() is reporting inconsistent results!");
                }

                if (size <= maxSize) {
                    break;
                }
            
                // The AOSP implementation calls map.eldest(), which is not
                // available on the standard java.util.LinkedHashMap, so the
                // eldest entry is fetched through an iterator instead.
                // (modified by echy)
                Map.Entry<K, V> toEvict = null;
                Iterator<Map.Entry<K, V>> iter = map.entrySet().iterator();
                if (iter.hasNext()) {
                    toEvict = iter.next();
                }

                if (toEvict == null) {
                    break;
                }

                key = toEvict.getKey();
                value = toEvict.getValue();
                map.remove(key);
                size -= safeSizeOf(key, value);
                evictionCount++;
            }

            entryRemoved(true, key, value, null);
        }
    }

    /**
     * Removes the entry for {@code key} if it exists.
     *
     * @return the previous value mapped by {@code key}.
     */
    public final V remove(K key) {
        if (key == null) {
            throw new NullPointerException("key == null");
        }

        V previous;
        synchronized (this) {
            previous = map.remove(key);
            if (previous != null) {
                size -= safeSizeOf(key, previous);
            }
        }

        if (previous != null) {
            entryRemoved(false, key, previous, null);
        }

        return previous;
    }

    /**
     * Called for entries that have been evicted or removed. This method is
     * invoked when a value is evicted to make space, removed by a call to
     * {@link #remove}, or replaced by a call to {@link #put}. The default
     * implementation does nothing.
     *
     * <p>The method is called without synchronization: other threads may
     * access the cache while this method is executing.
     *
     * @param evicted true if the entry is being removed to make space, false
     *     if the removal was caused by a {@link #put} or {@link #remove}.
     * @param newValue the new value for {@code key}, if it exists. If non-null,
     *     this removal was caused by a {@link #put}. Otherwise it was caused by
     *     an eviction or a {@link #remove}.
     */
    protected void entryRemoved(boolean evicted, K key, V oldValue, V newValue) {}

    /**
     * Called after a cache miss to compute a value for the corresponding key.
     * Returns the computed value or null if no value can be computed. The
     * default implementation returns null.
     *
     * <p>The method is called without synchronization: other threads may
     * access the cache while this method is executing.
     *
     * <p>If a value for {@code key} exists in the cache when this method
     * returns, the created value will be released with {@link #entryRemoved}
     * and discarded. This can occur when multiple threads request the same key
     * at the same time (causing multiple values to be created), or when one
     * thread calls {@link #put} while another is creating a value for the same
     * key.
     */
    protected V create(K key) {
        return null;
    }

    private int safeSizeOf(K key, V value) {
        int result = sizeOf(key, value);
        if (result < 0) {
            throw new IllegalStateException("Negative size: " + key + "=" + value);
        }
        return result;
    }

    /**
     * Returns the size of the entry for {@code key} and {@code value} in
     * user-defined units.  The default implementation returns 1 so that size
     * is the number of entries and max size is the maximum number of entries.
     *
     * <p>An entry's size must not change while it is in the cache.
     */
    protected int sizeOf(K key, V value) {
        return 1;
    }

    /**
     * Clear the cache, calling {@link #entryRemoved} on each removed entry.
     */
    public final void evictAll() {
        trimToSize(-1); // -1 will evict 0-sized elements
    }

    /**
     * For caches that do not override {@link #sizeOf}, this returns the number
     * of entries in the cache. For all other caches, this returns the sum of
     * the sizes of the entries in this cache.
     */
    public synchronized final int size() {
        return size;
    }

    /**
     * For caches that do not override {@link #sizeOf}, this returns the maximum
     * number of entries in the cache. For all other caches, this returns the
     * maximum sum of the sizes of the entries in this cache.
     */
    public synchronized final int maxSize() {
        return maxSize;
    }

    /**
     * Returns the number of times {@link #get} returned a value that was
     * already present in the cache.
     */
    public synchronized final int hitCount() {
        return hitCount;
    }

    /**
     * Returns the number of times {@link #get} returned null or required a new
     * value to be created.
     */
    public synchronized final int missCount() {
        return missCount;
    }

    /**
     * Returns the number of times {@link #create(Object)} returned a value.
     */
    public synchronized final int createCount() {
        return createCount;
    }

    /**
     * Returns the number of times {@link #put} was called.
     */
    public synchronized final int putCount() {
        return putCount;
    }

    /**
     * Returns the number of values that have been evicted.
     */
    public synchronized final int evictionCount() {
        return evictionCount;
    }

    /**
     * Returns a copy of the current contents of the cache, ordered from least
     * recently accessed to most recently accessed.
     */
    public synchronized final Map<K, V> snapshot() {
        return new LinkedHashMap<K, V>(map);
    }

    @Override public synchronized final String toString() {
        int accesses = hitCount + missCount;
        int hitPercent = accesses != 0 ? (100 * hitCount / accesses) : 0;
        return String.format("LruCache[maxSize=%d,hits=%d,misses=%d,hitRate=%d%%]",
                maxSize, hitCount, missCount, hitPercent);
    }
}
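
A quick sanity check of the class, as a small usage example: with sizeOf() left at its default of 1 per entry, a cache with maxSize 2 evicts the least recently used key first.

LruCache<String, String> cache = new LruCache<String, String>(2); // room for 2 entries
cache.put("a", "1");
cache.put("b", "2");
cache.get("a");                                // "a" becomes the most recently used entry
cache.put("c", "3");                           // over maxSize: "b" is evicted
System.out.println(cache.snapshot().keySet()); // prints [a, c]
System.out.println(cache.evictionCount());     // prints 1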