[Intel-gfx] [PATCH 001/190] drm: Release driver references to handle before making it available again
Chris Wilson
2016-01-11 09:16:12 UTC
When userspace closes a handle, we remove it from the file->object_idr
and then tell the driver to drop its references to that file/handle.
However, as the file/handle is already available again for reuse, it may
be reallocated back to userspace and active on a new object before the
driver has had a chance to drop the old file/handle references.

Whilst calling back into the driver, we have to drop the
file->table_lock spinlock, so to prevent the closed handle from being
reused in the meantime we mark it as stale in the idr, perform the
callback and only then remove the handle. As the stale handle points to
the NULL object, any idr_find() whilst the driver is removing the
handle returns NULL, just as if the handle had already been removed
from the idr.

v2: Use NULL rather than an ERR_PTR to avoid having to adjust callers.
idr_alloc() tracks existing handles using an internal bitmap, so we are
free to use the NULL object as our stale identifier.
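
For reference, the resulting function reads as below - a consolidated
sketch of the hunk that follows, with locals not shown in the hunk
trimmed:

int drm_gem_handle_delete(struct drm_file *filp, u32 handle)
{
        struct drm_gem_object *obj;

        spin_lock(&filp->table_lock);

        /* Mark the handle stale: a concurrent idr_find() now sees NULL,
         * whilst idr_alloc() cannot hand the handle out again as it is
         * still set in the idr's internal bitmap.
         */
        obj = idr_replace(&filp->object_idr, NULL, handle);
        if (IS_ERR(obj)) { /* handle was never allocated */
                spin_unlock(&filp->table_lock);
                return -EINVAL;
        }
        spin_unlock(&filp->table_lock);

        /* Safe to drop the spinlock around the driver callback */
        drm_gem_object_release_handle(handle, obj, filp);

        /* Only now make the handle available for reuse */
        spin_lock(&filp->table_lock);
        idr_remove(&filp->object_idr, handle);
        spin_unlock(&filp->table_lock);

        return 0;
}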

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
Cc: dri-***@lists.freedesktop.org
Cc: David Airlie <***@linux.ie>
Cc: Daniel Vetter <***@intel.com>
Cc: Rob Clark <***@gmail.com>
Cc: Ville Syrjälä <***@linux.intel.com>
Cc: Thierry Reding <***@nvidia.com>
---
drivers/gpu/drm/drm_gem.c | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 2e8c77e71e1f..d1909d1a1eb4 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -294,18 +294,21 @@ drm_gem_handle_delete(struct drm_file *filp, u32 handle)
spin_lock(&filp->table_lock);

/* Check if we currently have a reference on the object */
- obj = idr_find(&filp->object_idr, handle);
- if (obj == NULL) {
+ obj = idr_replace(&filp->object_idr, NULL, handle);
+ if (IS_ERR(obj)) {
spin_unlock(&filp->table_lock);
return -EINVAL;
}
dev = obj->dev;
+ spin_unlock(&filp->table_lock);

/* Release reference and decrement refcount. */
+ drm_gem_object_release_handle(handle, obj, filp);
+
+ spin_lock(&filp->table_lock);
idr_remove(&filp->object_idr, handle);
spin_unlock(&filp->table_lock);

- drm_gem_object_release_handle(handle, obj, filp);
return 0;
}
EXPORT_SYMBOL(drm_gem_handle_delete);
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:16:13 UTC
As paranoia, we want to ensure that the CPU's PTEs have been revoked for
the object before we return from i915_gem_release_mmap(). This allows us
to rely on there being no outstanding memory accesses and guarantees
serialisation of the code against concurrent access just by calling
i915_gem_release_mmap().

v2: Reduce the mb() into a wmb() following the revoke.
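
To make the serialisation argument concrete, the interleaving we rely
upon looks roughly like this (a sketch assuming the fault handler takes
struct_mutex, as i915_gem_fault() does):

/*
 *   CPU A (driver)                      CPU B (userspace)
 *   mutex_lock(&dev->struct_mutex)
 *   i915_gem_release_mmap(obj)          GTT write -> pagefault
 *     drm_vma_node_unmap(...)             i915_gem_fault()
 *     wmb()                                 mutex_lock() <- blocks
 *   ...no outstanding access...
 *   mutex_unlock(&dev->struct_mutex)      PTE reinserted, write lands
 */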

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <***@linux.intel.com>
Cc: "Goel, Akash" <***@intel.com
Cc: Daniel Vetter <***@ffwll.ch>
Reviewed-by: Daniel Vetter <***@ffwll.ch>
---
drivers/gpu/drm/i915/i915_gem.c | 13 ++++++++++---
1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 6c60e04fc09c..3ab529669448 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1962,11 +1962,21 @@ out:
void
i915_gem_release_mmap(struct drm_i915_gem_object *obj)
{
+ /* Serialisation between user GTT access and our code depends upon
+ * revoking the CPU's PTE whilst the mutex is held. The next user
+ * pagefault then has to wait until we release the mutex.
+ */
+ lockdep_assert_held(&obj->base.dev->struct_mutex);
+
if (!obj->fault_mappable)
return;

drm_vma_node_unmap(&obj->base.vma_node,
obj->base.dev->anon_inode->i_mapping);
+
+ /* Ensure that the CPU's PTEs are revoked before we return */
+ wmb();
+
obj->fault_mappable = false;
}

@@ -3269,9 +3279,6 @@ static void i915_gem_object_finish_gtt(struct drm_i915_gem_object *obj)
if ((obj->base.read_domains & I915_GEM_DOMAIN_GTT) == 0)
return;

- /* Wait for any direct GTT access to complete */
- mb();
-
old_read_domains = obj->base.read_domains;
old_write_domain = obj->base.write_domain;
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:16:14 UTC
userptr requires mmu-notifier for full unprivileged support. Most
systems have mmu-notifier support already enabled as a requirement for
virtualisation support, but we should make the option for i915 to take
advantage of mmu-notifiers explicit (and enabled by default so that
regular userspace can take advantage of passing client memory to the
GPU).
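
For context, the userptr code already keys off the same symbol; the
guard has roughly this shape (a sketch of i915_gem_userptr.c, not a
verbatim quote):

#if defined(CONFIG_MMU_NOTIFIER)
/* full path: track the client's mm and unbind on invalidation */
#else
static int
i915_gem_userptr_init__mmu_notifier(struct drm_i915_gem_object *obj,
                                    unsigned flags)
{
        /* fallback: only the unsynchronized mode, restricted to root */
        if ((flags & I915_USERPTR_UNSYNCHRONIZED) == 0)
                return -ENODEV;

        if (!capable(CAP_SYS_ADMIN))
                return -EPERM;

        return 0;
}
#endif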

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <***@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <***@linux.intel.com>
---
drivers/gpu/drm/i915/Kconfig | 11 +++++++++++
1 file changed, 11 insertions(+)

diff --git a/drivers/gpu/drm/i915/Kconfig b/drivers/gpu/drm/i915/Kconfig
index fcd77b27514d..b979295aab82 100644
--- a/drivers/gpu/drm/i915/Kconfig
+++ b/drivers/gpu/drm/i915/Kconfig
@@ -48,3 +48,14 @@ config DRM_I915_PRELIMINARY_HW_SUPPORT
option changes the default for that module option.

If in doubt, say "N".
+
+config DRM_I915_USERPTR
+ bool "Always enable userptr support"
+ depends on DRM_I915
+ select MMU_NOTIFIER
+ default y
+ help
+ This option selects CONFIG_MMU_NOTIFIER if it isn't already
+ selected to enable full userptr support.
+
+ If in doubt, say "Y".
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:16:16 UTC
Our driver compiles clean (nowadays thanks to 0day) but for me, at least,
it would be beneficial if the compiler threw an error rather than a
warning when it found a piece of suspect code. (I use this to
compile-check patch series and want to break on the first compiler error
in order to fix the patch.)

v2: Kick off a new "Debugging" submenu for i915.ko

At this point, we applied it to the kernel and promptly kicked it out
again as it broke the buildbots (due to a compiler warning on 32-bit
builds):

commit 908d759b210effb33d927a8cb6603a16448474e4
Author: Daniel Vetter <***@ffwll.ch>
Date: Tue May 26 07:46:21 2015 +0200

Revert "drm/i915: Force clean compilation with -Werror"

v3: Avoid enabling -Werror for allyesconfig/allmodconfig builds, using
COMPILE_TEST as a suitable proxy suggested by Andrew Morton. (Damien)
Only make the option available for EXPERT to reinforce that the option
should not be casually enabled.

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
Cc: Jani Nikula <***@intel.com>
Cc: Damien Lespiau <***@intel.com>
Reviewed-by: Daniel Vetter <***@ffwll.ch>
---
drivers/gpu/drm/i915/Kconfig | 6 ++++++
drivers/gpu/drm/i915/Kconfig.debug | 12 ++++++++++++
drivers/gpu/drm/i915/Makefile | 2 ++
3 files changed, 20 insertions(+)
create mode 100644 drivers/gpu/drm/i915/Kconfig.debug

diff --git a/drivers/gpu/drm/i915/Kconfig b/drivers/gpu/drm/i915/Kconfig
index b979295aab82..33e8563c2f99 100644
--- a/drivers/gpu/drm/i915/Kconfig
+++ b/drivers/gpu/drm/i915/Kconfig
@@ -59,3 +59,9 @@ config DRM_I915_USERPTR
selected to enable full userptr support.

If in doubt, say "Y".
+
+menu "drm/i915 Debugging"
+depends on DRM_I915
+depends on EXPERT
+source drivers/gpu/drm/i915/Kconfig.debug
+endmenu
diff --git a/drivers/gpu/drm/i915/Kconfig.debug b/drivers/gpu/drm/i915/Kconfig.debug
new file mode 100644
index 000000000000..1f10ee228eda
--- /dev/null
+++ b/drivers/gpu/drm/i915/Kconfig.debug
@@ -0,0 +1,12 @@
+config DRM_I915_WERROR
+ bool "Force GCC to throw an error instead of a warning when compiling"
+ default n
+ # As this may inadvertently break the build, only allow the user
+ # to shoot oneself in the foot iff they aim really hard
+ depends on EXPERT
+ # We use the dependency on !COMPILE_TEST to not be enabled in
+ # allmodconfig or allyesconfig configurations
+ depends on !COMPILE_TEST
+ ---help---
+ Add -Werror to the build flags for (and only for) i915.ko.
+ Do not enable this unless you are writing code for the i915.ko module.
diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index 0851de07bd13..1e9895b9a546 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -2,6 +2,8 @@
# Makefile for the drm device driver. This driver provides support for the
# Direct Rendering Infrastructure (DRI) in XFree86 4.1.0 and higher.

+subdir-ccflags-$(CONFIG_DRM_I915_WERROR) := -Werror
+
# Please keep these build lists sorted!

# core driver code
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:16:18 UTC
This is principally a little bit of syntactic sugar to hide the
atomic_read()s throughout the code to retrieve the current reset_counter.
It also provides the other utility functions to check the reset state on the
already read reset_counter, so that (in later patches) we can read it once
and do multiple tests rather than risk the value changing between tests.

v2: Strictly convert the existing i915_reset_in_progress() callers over
to the more verbose i915_reset_in_progress_or_wedged().
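
The intended pattern, using the helper names introduced below, is to
sample the counter once and then run every test against that one sample
(a sketch, not a hunk from this patch):

        unsigned reset = i915_reset_counter(&dev_priv->gpu_error);

        if (__i915_reset_in_progress_or_wedged(reset)) {
                /* Both tests operate on the same snapshot, so they
                 * cannot disagree about which reset epoch we are in.
                 */
                if (__i915_terminally_wedged(reset))
                        return -EIO;

                return -EAGAIN;
        }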

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
Cc: Daniel Vetter <***@ffwll.ch>
Reviewed-by: Daniel Vetter <***@ffwll.ch>
---
drivers/gpu/drm/i915/i915_debugfs.c | 4 ++--
drivers/gpu/drm/i915/i915_drv.h | 32 ++++++++++++++++++++++++++++----
drivers/gpu/drm/i915/i915_gem.c | 16 ++++++++--------
drivers/gpu/drm/i915/i915_irq.c | 2 +-
drivers/gpu/drm/i915/intel_display.c | 18 +++++++++++-------
drivers/gpu/drm/i915/intel_lrc.c | 2 +-
drivers/gpu/drm/i915/intel_ringbuffer.c | 4 ++--
7 files changed, 53 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index e3377abc0d4d..932af05b8eec 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -4696,7 +4696,7 @@ i915_wedged_get(void *data, u64 *val)
struct drm_device *dev = data;
struct drm_i915_private *dev_priv = dev->dev_private;

- *val = atomic_read(&dev_priv->gpu_error.reset_counter);
+ *val = i915_reset_counter(&dev_priv->gpu_error);

return 0;
}
@@ -4715,7 +4715,7 @@ i915_wedged_set(void *data, u64 val)
* while it is writing to 'i915_wedged'
*/

- if (i915_reset_in_progress(&dev_priv->gpu_error))
+ if (i915_reset_in_progress_or_wedged(&dev_priv->gpu_error))
return -EAGAIN;

intel_runtime_pm_get(dev_priv);
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 1a6168affadd..b274237726de 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2983,20 +2983,44 @@ void i915_gem_retire_requests_ring(struct intel_engine_cs *ring);
int __must_check i915_gem_check_wedge(struct i915_gpu_error *error,
bool interruptible);

+static inline u32 i915_reset_counter(struct i915_gpu_error *error)
+{
+ return atomic_read(&error->reset_counter);
+}
+
+static inline bool __i915_reset_in_progress(u32 reset)
+{
+ return unlikely(reset & I915_RESET_IN_PROGRESS_FLAG);
+}
+
+static inline bool __i915_reset_in_progress_or_wedged(u32 reset)
+{
+ return unlikely(reset & (I915_RESET_IN_PROGRESS_FLAG | I915_WEDGED));
+}
+
+static inline bool __i915_terminally_wedged(u32 reset)
+{
+ return unlikely(reset & I915_WEDGED);
+}
+
static inline bool i915_reset_in_progress(struct i915_gpu_error *error)
{
- return unlikely(atomic_read(&error->reset_counter)
- & (I915_RESET_IN_PROGRESS_FLAG | I915_WEDGED));
+ return __i915_reset_in_progress(i915_reset_counter(error));
+}
+
+static inline bool i915_reset_in_progress_or_wedged(struct i915_gpu_error *error)
+{
+ return __i915_reset_in_progress_or_wedged(i915_reset_counter(error));
}

static inline bool i915_terminally_wedged(struct i915_gpu_error *error)
{
- return atomic_read(&error->reset_counter) & I915_WEDGED;
+ return __i915_terminally_wedged(i915_reset_counter(error));
}

static inline u32 i915_reset_count(struct i915_gpu_error *error)
{
- return ((atomic_read(&error->reset_counter) & ~I915_WEDGED) + 1) / 2;
+ return ((i915_reset_counter(error) & ~I915_WEDGED) + 1) / 2;
}

static inline bool i915_stop_ring_allow_ban(struct drm_i915_private *dev_priv)
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 99fd6aa4dd62..78bf980a69bf 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -83,7 +83,7 @@ i915_gem_wait_for_error(struct i915_gpu_error *error)
{
int ret;

-#define EXIT_COND (!i915_reset_in_progress(error) || \
+#define EXIT_COND (!i915_reset_in_progress_or_wedged(error) || \
i915_terminally_wedged(error))
if (EXIT_COND)
return 0;
@@ -1111,7 +1111,7 @@ int
i915_gem_check_wedge(struct i915_gpu_error *error,
bool interruptible)
{
- if (i915_reset_in_progress(error)) {
+ if (i915_reset_in_progress_or_wedged(error)) {
/* Non-interruptible callers can't handle -EAGAIN, hence return
* -EIO unconditionally for these. */
if (!interruptible)
@@ -1295,7 +1295,7 @@ int __i915_wait_request(struct drm_i915_gem_request *req,

/* We need to check whether any gpu reset happened in between
* the caller grabbing the seqno and now ... */
- if (reset_counter != atomic_read(&dev_priv->gpu_error.reset_counter)) {
+ if (reset_counter != i915_reset_counter(&dev_priv->gpu_error)) {
/* ... but upgrade the -EAGAIN to an -EIO if the gpu
* is truely gone. */
ret = i915_gem_check_wedge(&dev_priv->gpu_error, interruptible);
@@ -1473,7 +1473,7 @@ i915_wait_request(struct drm_i915_gem_request *req)
return ret;

ret = __i915_wait_request(req,
- atomic_read(&dev_priv->gpu_error.reset_counter),
+ i915_reset_counter(&dev_priv->gpu_error),
interruptible, NULL, NULL);
if (ret)
return ret;
@@ -1562,7 +1562,7 @@ i915_gem_object_wait_rendering__nonblocking(struct drm_i915_gem_object *obj,
if (ret)
return ret;

- reset_counter = atomic_read(&dev_priv->gpu_error.reset_counter);
+ reset_counter = i915_reset_counter(&dev_priv->gpu_error);

if (readonly) {
struct drm_i915_gem_request *req;
@@ -3115,7 +3115,7 @@ i915_gem_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
}

drm_gem_object_unreference(&obj->base);
- reset_counter = atomic_read(&dev_priv->gpu_error.reset_counter);
+ reset_counter = i915_reset_counter(&dev_priv->gpu_error);

for (i = 0; i < I915_NUM_RINGS; i++) {
if (obj->last_read_req[i] == NULL)
@@ -3160,7 +3160,7 @@ __i915_gem_object_sync(struct drm_i915_gem_object *obj,
if (!i915_semaphore_is_enabled(obj->base.dev)) {
struct drm_i915_private *i915 = to_i915(obj->base.dev);
ret = __i915_wait_request(from_req,
- atomic_read(&i915->gpu_error.reset_counter),
+ i915_reset_counter(&i915->gpu_error),
i915->mm.interruptible,
NULL,
&i915->rps.semaphores);
@@ -4128,7 +4128,7 @@ i915_gem_ring_throttle(struct drm_device *dev, struct drm_file *file)

target = request;
}
- reset_counter = atomic_read(&dev_priv->gpu_error.reset_counter);
+ reset_counter = i915_reset_counter(&dev_priv->gpu_error);
if (target)
i915_gem_request_reference(target);
spin_unlock(&file_priv->mm.lock);
diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index f04d799153ca..9a6b0ac54d01 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -2484,7 +2484,7 @@ static void i915_reset_and_wakeup(struct drm_device *dev)
* the reset in-progress bit is only ever set by code outside of this
* work we don't need to worry about any other races.
*/
- if (i915_reset_in_progress(error) && !i915_terminally_wedged(error)) {
+ if (i915_reset_in_progress_or_wedged(error) && !i915_terminally_wedged(error)) {
DRM_DEBUG_DRIVER("resetting chip\n");
kobject_uevent_env(&dev->primary->kdev->kobj, KOBJ_CHANGE,
reset_event);
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 959868c40018..0933bdbaa935 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -3290,10 +3290,12 @@ static bool intel_crtc_has_pending_flip(struct drm_crtc *crtc)
struct drm_device *dev = crtc->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
+ unsigned reset_counter;
bool pending;

- if (i915_reset_in_progress(&dev_priv->gpu_error) ||
- intel_crtc->reset_counter != atomic_read(&dev_priv->gpu_error.reset_counter))
+ reset_counter = i915_reset_counter(&dev_priv->gpu_error);
+ if (intel_crtc->reset_counter != reset_counter ||
+ __i915_reset_in_progress_or_wedged(reset_counter))
return false;

spin_lock_irq(&dev->event_lock);
@@ -11006,9 +11008,11 @@ static bool page_flip_finished(struct intel_crtc *crtc)
{
struct drm_device *dev = crtc->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
+ unsigned reset_counter;

- if (i915_reset_in_progress(&dev_priv->gpu_error) ||
- crtc->reset_counter != atomic_read(&dev_priv->gpu_error.reset_counter))
+ reset_counter = i915_reset_counter(&dev_priv->gpu_error);
+ if (crtc->reset_counter != reset_counter ||
+ __i915_reset_in_progress_or_wedged(reset_counter))
return true;

/*
@@ -11665,7 +11669,7 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
goto cleanup;

atomic_inc(&intel_crtc->unpin_work_count);
- intel_crtc->reset_counter = atomic_read(&dev_priv->gpu_error.reset_counter);
+ intel_crtc->reset_counter = i915_reset_counter(&dev_priv->gpu_error);

if (INTEL_INFO(dev)->gen >= 5 || IS_G4X(dev))
work->flip_count = I915_READ(PIPE_FLIPCOUNT_G4X(pipe)) + 1;
@@ -13499,10 +13503,10 @@ static int intel_atomic_prepare_commit(struct drm_device *dev,
return ret;

ret = drm_atomic_helper_prepare_planes(dev, state);
- if (!ret && !async && !i915_reset_in_progress(&dev_priv->gpu_error)) {
+ if (!ret && !async && !i915_reset_in_progress_or_wedged(&dev_priv->gpu_error)) {
u32 reset_counter;

- reset_counter = atomic_read(&dev_priv->gpu_error.reset_counter);
+ reset_counter = i915_reset_counter(&dev_priv->gpu_error);
mutex_unlock(&dev->struct_mutex);

for_each_plane_in_state(state, plane, plane_state, i) {
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 7f17ba852b8a..254ce14d790b 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1011,7 +1011,7 @@ void intel_logical_ring_stop(struct intel_engine_cs *ring)
return;

ret = intel_ring_idle(ring);
- if (ret && !i915_reset_in_progress(&to_i915(ring->dev)->gpu_error))
+ if (ret && !i915_reset_in_progress_or_wedged(&to_i915(ring->dev)->gpu_error))
DRM_ERROR("failed to quiesce %s whilst cleaning up: %d\n",
ring->name, ret);

diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index 339701d7a9a5..8c6b15ab652b 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -2274,7 +2274,7 @@ int intel_ring_idle(struct intel_engine_cs *ring)

/* Make sure we do not trigger any retires */
return __i915_wait_request(req,
- atomic_read(&to_i915(ring->dev)->gpu_error.reset_counter),
+ i915_reset_counter(&to_i915(ring->dev)->gpu_error),
to_i915(ring->dev)->mm.interruptible,
NULL, NULL);
}
@@ -3068,7 +3068,7 @@ intel_stop_ring_buffer(struct intel_engine_cs *ring)
return;

ret = intel_ring_idle(ring);
- if (ret && !i915_reset_in_progress(&to_i915(ring->dev)->gpu_error))
+ if (ret && !i915_reset_in_progress_or_wedged(&to_i915(ring->dev)->gpu_error))
DRM_ERROR("failed to quiesce %s whilst cleaning up: %d\n",
ring->name, ret);
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:16:17 UTC
Currently there is a #define to enable extra BUG_ON for debugging
requests and associated activities. I want to expand its use to cover
all of GEM internals (so that we can saturate the code with asserts).
We can add a Kconfig option to make it easier to enable - with the usual
caveats of not enabling unless explicitly requested.
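
One caveat: with the option disabled, GEM_BUG_ON() expands to nothing
at all, so the asserted expression must be free of side effects (ref
below is purely illustrative):

        GEM_BUG_ON(obj->active);               /* fine: side-effect free */
        GEM_BUG_ON(atomic_dec_and_test(&ref)); /* wrong: the decrement
                                                * vanishes when the
                                                * option is off */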

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <***@intel.com>
---
drivers/gpu/drm/i915/Kconfig.debug | 8 ++++++++
drivers/gpu/drm/i915/i915_drv.h | 6 ++++++
drivers/gpu/drm/i915/i915_gem.c | 12 +++++-------
3 files changed, 19 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/i915/Kconfig.debug b/drivers/gpu/drm/i915/Kconfig.debug
index 1f10ee228eda..7fa6b97635e5 100644
--- a/drivers/gpu/drm/i915/Kconfig.debug
+++ b/drivers/gpu/drm/i915/Kconfig.debug
@@ -10,3 +10,11 @@ config DRM_I915_WERROR
---help---
Add -Werror to the build flags for (and only for) i915.ko.
Do not enable this unless you are writing code for the i915.ko module.
+
+config DRM_I915_DEBUG_GEM
+ bool "Insert extra checks into the GEM internals"
+ default n
+ depends on DRM_I915_WERROR
+ ---help---
+ Enable extra sanity checks (including BUGs) that may slow the
+ system down and if hit hang the machine.
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index ec20814adb0c..1a6168affadd 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2271,6 +2271,12 @@ struct drm_i915_gem_request {

};

+#ifdef CONFIG_DRM_I915_DEBUG_GEM
+#define GEM_BUG_ON(expr) BUG_ON(expr)
+#else
+#define GEM_BUG_ON(expr)
+#endif
+
int i915_gem_request_alloc(struct intel_engine_cs *ring,
struct intel_context *ctx,
struct drm_i915_gem_request **req_out);
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index fd24877eb0a0..99fd6aa4dd62 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -38,8 +38,6 @@
#include <linux/pci.h>
#include <linux/dma-buf.h>

-#define RQ_BUG_ON(expr)
-
static void i915_gem_object_flush_gtt_write_domain(struct drm_i915_gem_object *obj);
static void i915_gem_object_flush_cpu_write_domain(struct drm_i915_gem_object *obj);
static void
@@ -1520,7 +1518,7 @@ i915_gem_object_wait_rendering(struct drm_i915_gem_object *obj,

i915_gem_object_retire__read(obj, i);
}
- RQ_BUG_ON(obj->active);
+ GEM_BUG_ON(obj->active);
}

return 0;
@@ -2430,8 +2428,8 @@ void i915_vma_move_to_active(struct i915_vma *vma,
static void
i915_gem_object_retire__write(struct drm_i915_gem_object *obj)
{
- RQ_BUG_ON(obj->last_write_req == NULL);
- RQ_BUG_ON(!(obj->active & intel_ring_flag(obj->last_write_req->ring)));
+ GEM_BUG_ON(obj->last_write_req == NULL);
+ GEM_BUG_ON(!(obj->active & intel_ring_flag(obj->last_write_req->ring)));

i915_gem_request_assign(&obj->last_write_req, NULL);
intel_fb_obj_flush(obj, true, ORIGIN_CS);
@@ -2442,8 +2440,8 @@ i915_gem_object_retire__read(struct drm_i915_gem_object *obj, int ring)
{
struct i915_vma *vma;

- RQ_BUG_ON(obj->last_read_req[ring] == NULL);
- RQ_BUG_ON(!(obj->active & (1 << ring)));
+ GEM_BUG_ON(obj->last_read_req[ring] == NULL);
+ GEM_BUG_ON(!(obj->active & (1 << ring)));

list_del_init(&obj->ring_list[ring]);
i915_gem_request_assign(&obj->last_read_req[ring], NULL);
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:16:22 UTC
Now that the reset_counter is stored on the request, we can rearrange
the code to handle reading the counter versus waiting during the atomic
modesetting for readability (by deleting the hairiest of the code).

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
Cc: Daniel Vetter <***@ffwll.ch>
Reviewed-by: Daniel Vetter <***@ffwll.ch>
---
drivers/gpu/drm/i915/intel_display.c | 18 +++++++-----------
1 file changed, 7 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 4f36313f31ac..ee0ec72b16b4 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -13504,9 +13504,9 @@ static int intel_atomic_prepare_commit(struct drm_device *dev,
return ret;

ret = drm_atomic_helper_prepare_planes(dev, state);
- if (!ret && !async && !i915_reset_in_progress_or_wedged(&dev_priv->gpu_error)) {
- mutex_unlock(&dev->struct_mutex);
+ mutex_unlock(&dev->struct_mutex);

+ if (!ret && !async) {
for_each_plane_in_state(state, plane, plane_state, i) {
struct intel_plane_state *intel_plane_state =
to_intel_plane_state(plane_state);
@@ -13520,19 +13520,15 @@ static int intel_atomic_prepare_commit(struct drm_device *dev,
/* Swallow -EIO errors to allow updates during hw lockup. */
if (ret == -EIO)
ret = 0;
-
- if (ret)
+ if (ret) {
+ mutex_lock(&dev->struct_mutex);
+ drm_atomic_helper_cleanup_planes(dev, state);
+ mutex_unlock(&dev->struct_mutex);
break;
+ }
}
-
- if (!ret)
- return 0;
-
- mutex_lock(&dev->struct_mutex);
- drm_atomic_helper_cleanup_planes(dev, state);
}

- mutex_unlock(&dev->struct_mutex);
return ret;
}
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:16:15 UTC
As we add the VMA to the request early, it may be cancelled during
execbuf reservation. This will leave the context object pointing to a
dangling request; i915_wait_request() simply skips the wait and so we
may unbind the object whilst it is still active.

However, if at any point we make a change to the hardware (and equally
importantly our bookkeeping in the driver), we cannot cancel the request
as what has already been written must be submitted. Submitting a partial
request is far easier than trying to unwind the incomplete change.

Unfortunately this patch undoes the excess breadcrumb usage that the
OLR (outstanding lazy request) prevented, e.g. if we interrupt
batchbuffer submission then we submit
the requests along with the memory writes and interrupt (even though we
do no real work). Disassociating requests from breadcrumbs (and
semaphores) is a topic for a past/future series, but now much more
important.
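
The resulting rule, as applied in the i915_gpu_idle() hunk below: once
the request may have emitted commands, always submit it and only then
report the error:

        ret = i915_switch_context(req);
        /* The switch may already have written commands into the ring,
         * so we can no longer cancel: submit the partial request and
         * propagate the error afterwards.
         */
        i915_add_request_no_flush(req);
        if (ret)
                return ret;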

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
Cc: Daniel Vetter <***@ffwll.ch>
Cc: ***@vger.kernel.org
---
drivers/gpu/drm/i915/i915_drv.h | 1 -
drivers/gpu/drm/i915/i915_gem.c | 7 ++-----
drivers/gpu/drm/i915/i915_gem_context.c | 21 +++++++++------------
drivers/gpu/drm/i915/i915_gem_execbuffer.c | 16 +++++-----------
drivers/gpu/drm/i915/intel_display.c | 2 +-
drivers/gpu/drm/i915/intel_lrc.c | 1 -
6 files changed, 17 insertions(+), 31 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 747d2d84a18c..ec20814adb0c 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2813,7 +2813,6 @@ int i915_gem_sw_finish_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
void i915_gem_execbuffer_move_to_active(struct list_head *vmas,
struct drm_i915_gem_request *req);
-void i915_gem_execbuffer_retire_commands(struct i915_execbuffer_params *params);
int i915_gem_ringbuffer_submission(struct i915_execbuffer_params *params,
struct drm_i915_gem_execbuffer2 *args,
struct list_head *vmas);
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 3ab529669448..fd24877eb0a0 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -3384,12 +3384,9 @@ int i915_gpu_idle(struct drm_device *dev)
return ret;

ret = i915_switch_context(req);
- if (ret) {
- i915_gem_request_cancel(req);
- return ret;
- }
-
i915_add_request_no_flush(req);
+ if (ret)
+ return ret;
}

ret = intel_ring_idle(ring);
diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index c25083c78ba7..e5e9a8918f19 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -661,7 +661,6 @@ static int do_switch(struct drm_i915_gem_request *req)
struct drm_i915_private *dev_priv = ring->dev->dev_private;
struct intel_context *from = ring->last_context;
u32 hw_flags = 0;
- bool uninitialized = false;
int ret, i;

if (from != NULL && ring == &dev_priv->ring[RCS]) {
@@ -768,6 +767,15 @@ static int do_switch(struct drm_i915_gem_request *req)
to->remap_slice &= ~(1<<i);
}

+ if (!to->legacy_hw_ctx.initialized) {
+ if (ring->init_context) {
+ ret = ring->init_context(req);
+ if (ret)
+ goto unpin_out;
+ }
+ to->legacy_hw_ctx.initialized = true;
+ }
+
/* The backing object for the context is done after switching to the
* *next* context. Therefore we cannot retire the previous context until
* the next context has already started running. In fact, the below code
@@ -791,21 +799,10 @@ static int do_switch(struct drm_i915_gem_request *req)
i915_gem_context_unreference(from);
}

- uninitialized = !to->legacy_hw_ctx.initialized;
- to->legacy_hw_ctx.initialized = true;
-
done:
i915_gem_context_reference(to);
ring->last_context = to;

- if (uninitialized) {
- if (ring->init_context) {
- ret = ring->init_context(req);
- if (ret)
- DRM_ERROR("ring init context: %d\n", ret);
- }
- }
-
return 0;

unpin_out:
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index dccb517361b3..b8186bd061c1 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -1136,7 +1136,7 @@ i915_gem_execbuffer_move_to_active(struct list_head *vmas,
}
}

-void
+static void
i915_gem_execbuffer_retire_commands(struct i915_execbuffer_params *params)
{
/* Unconditionally force add_request to emit a full flush. */
@@ -1318,7 +1318,6 @@ i915_gem_ringbuffer_submission(struct i915_execbuffer_params *params,
trace_i915_gem_ring_dispatch(params->request, params->dispatch_flags);

i915_gem_execbuffer_move_to_active(vmas, params->request);
- i915_gem_execbuffer_retire_commands(params);

return 0;
}
@@ -1607,8 +1606,10 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
goto err_batch_unpin;

ret = i915_gem_request_add_to_client(params->request, file);
- if (ret)
+ if (ret) {
+ i915_gem_request_cancel(params->request);
goto err_batch_unpin;
+ }

/*
* Save assorted stuff away to pass through to *_submission().
@@ -1624,6 +1625,7 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
params->ctx = ctx;

ret = dev_priv->gt.execbuf_submit(params, args, &eb->vmas);
+ i915_gem_execbuffer_retire_commands(params);

err_batch_unpin:
/*
@@ -1640,14 +1642,6 @@ err:
i915_gem_context_unreference(ctx);
eb_destroy(eb);

- /*
- * If the request was created but not successfully submitted then it
- * must be freed again. If it was submitted then it is being tracked
- * on the active request list and no clean up is required here.
- */
- if (ret && params->request)
- i915_gem_request_cancel(params->request);
-
mutex_unlock(&dev->struct_mutex);

pre_mutex_err:
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index b4cf9ce16155..959868c40018 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -11751,7 +11751,7 @@ cleanup_unpin:
intel_unpin_fb_obj(fb, crtc->primary->state);
cleanup_pending:
if (request)
- i915_gem_request_cancel(request);
+ i915_add_request_no_flush(request);
atomic_dec(&intel_crtc->unpin_work_count);
mutex_unlock(&dev->struct_mutex);
cleanup:
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index f7fac5f3b5ce..7f17ba852b8a 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -972,7 +972,6 @@ int intel_execlists_submission(struct i915_execbuffer_params *params,
trace_i915_gem_ring_dispatch(params->request, params->dispatch_flags);

i915_gem_execbuffer_move_to_active(vmas, params->request);
- i915_gem_execbuffer_retire_commands(params);

return 0;
}
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:16:28 UTC
In order to ensure seqno/irq coherency, we currently read a ring register.
We are not quite sure how it works, only that it does. Experiments show
that e.g. doing a clflush(seqno) instead is not sufficient, but we can
remove the forcewake dance from the mmio access.

v2: Baytrail wants a clflush too.

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
Cc: Daniel Vetter <***@ffwll.ch>
---
drivers/gpu/drm/i915/intel_ringbuffer.c | 15 +++++++++++++--
1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index 99780b674311..a1d43b2c7077 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -1490,10 +1490,21 @@ gen6_ring_get_seqno(struct intel_engine_cs *ring, bool lazy_coherency)
{
/* Workaround to force correct ordering between irq and seqno writes on
* ivb (and maybe also on snb) by reading from a CS register (like
- * ACTHD) before reading the status page. */
+ * ACTHD) before reading the status page.
+ *
+ * Note that this effectively stalls the read by the time
+ * it takes to do a memory transaction, which more or less ensures
+ * that the write from the GPU has sufficient time to invalidate
+ * the CPU cacheline. Alternatively we could delay the interrupt from
+ * the CS ring to give the write time to land, but that would incur
+ * a delay after every batch i.e. much more frequent than a delay
+ * when waiting for the interrupt (with the same net latency).
+ */
if (!lazy_coherency) {
struct drm_i915_private *dev_priv = ring->dev->dev_private;
- POSTING_READ(RING_ACTHD(ring->mmio_base));
+ POSTING_READ_FW(RING_ACTHD(ring->mmio_base));
+
+ intel_flush_status_page(ring, I915_GEM_HWS_INDEX);
}

return intel_read_status_page(ring, I915_GEM_HWS_INDEX);
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:16:21 UTC
As the request is only valid during the same global reset epoch, we can
record the current reset_counter when constructing the request and reuse
it when waiting upon that request in future. This removes a very hairy
atomic check serialised by the struct_mutex at the time of waiting and
allows us to transfer those waits to a central dispatcher for all
waiters and all requests.

PS: With per-engine resets, we obviously cannot assume a global reset
epoch for the requests - a per-engine epoch makes the most sense. The
challenge then is how to handle checking in the waiter for when to break
the wait, as the fine-grained reset may also want to requeue the
request (i.e. the assumption that just because the epoch changes the
request is completed may be broken - or we just avoid breaking that
assumption with the fine-grained resets).
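
On the wait side, the hairy atomic check then reduces to comparing the
request's stored epoch against the current one, as in this sketch of
the __i915_wait_request() hunk:

        /* Did a global reset occur after the request was constructed? */
        if (req->reset_counter != i915_reset_counter(&dev_priv->gpu_error)) {
                /* ... upgrading the -EAGAIN to an -EIO if the gpu is
                 * truly gone. */
                ret = i915_gem_check_wedge(&dev_priv->gpu_error,
                                           interruptible);
                if (ret == 0)
                        ret = -EAGAIN;
        }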

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
Cc: Daniel Vetter <***@ffwll.ch>
Reviewed-by: Daniel Vetter <***@ffwll.ch>
---
drivers/gpu/drm/i915/i915_drv.h | 2 +-
drivers/gpu/drm/i915/i915_gem.c | 40 +++++++++++----------------------
drivers/gpu/drm/i915/intel_display.c | 7 +-----
drivers/gpu/drm/i915/intel_lrc.c | 7 ------
drivers/gpu/drm/i915/intel_ringbuffer.c | 6 -----
5 files changed, 15 insertions(+), 47 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 60531df3844c..f74bca326b79 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2191,6 +2191,7 @@ struct drm_i915_gem_request {
/** On Which ring this request was generated */
struct drm_i915_private *i915;
struct intel_engine_cs *ring;
+ unsigned reset_counter;

/** GEM sequence number associated with the previous request,
* when the HWS breadcrumb is equal to this the GPU is processing
@@ -3050,7 +3051,6 @@ void __i915_add_request(struct drm_i915_gem_request *req,
#define i915_add_request_no_flush(req) \
__i915_add_request(req, NULL, false)
int __i915_wait_request(struct drm_i915_gem_request *req,
- unsigned reset_counter,
bool interruptible,
s64 *timeout,
struct intel_rps_client *rps);
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 2cdd20b3aeaf..56069bdada85 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1212,7 +1212,6 @@ static int __i915_spin_request(struct drm_i915_gem_request *req, int state)
/**
* __i915_wait_request - wait until execution of request has finished
* @req: duh!
- * @reset_counter: reset sequence associated with the given request
* @interruptible: do an interruptible wait (normally yes)
* @timeout: in - how long to wait (NULL forever); out - how much time remaining
*
@@ -1227,7 +1226,6 @@ static int __i915_spin_request(struct drm_i915_gem_request *req, int state)
* errno with remaining time filled in timeout argument.
*/
int __i915_wait_request(struct drm_i915_gem_request *req,
- unsigned reset_counter,
bool interruptible,
s64 *timeout,
struct intel_rps_client *rps)
@@ -1286,7 +1284,7 @@ int __i915_wait_request(struct drm_i915_gem_request *req,

/* We need to check whether any gpu reset happened in between
* the caller grabbing the seqno and now ... */
- if (reset_counter != i915_reset_counter(&dev_priv->gpu_error)) {
+ if (req->reset_counter != i915_reset_counter(&dev_priv->gpu_error)) {
/* ... but upgrade the -EAGAIN to an -EIO if the gpu
* is truely gone. */
ret = i915_gem_check_wedge(&dev_priv->gpu_error, interruptible);
@@ -1459,13 +1457,7 @@ i915_wait_request(struct drm_i915_gem_request *req)

BUG_ON(!mutex_is_locked(&dev->struct_mutex));

- ret = i915_gem_check_wedge(&dev_priv->gpu_error, interruptible);
- if (ret)
- return ret;
-
- ret = __i915_wait_request(req,
- i915_reset_counter(&dev_priv->gpu_error),
- interruptible, NULL, NULL);
+ ret = __i915_wait_request(req, interruptible, NULL, NULL);
if (ret)
return ret;

@@ -1540,7 +1532,6 @@ i915_gem_object_wait_rendering__nonblocking(struct drm_i915_gem_object *obj,
struct drm_device *dev = obj->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
struct drm_i915_gem_request *requests[I915_NUM_RINGS];
- unsigned reset_counter;
int ret, i, n = 0;

BUG_ON(!mutex_is_locked(&dev->struct_mutex));
@@ -1549,12 +1540,6 @@ i915_gem_object_wait_rendering__nonblocking(struct drm_i915_gem_object *obj,
if (!obj->active)
return 0;

- ret = i915_gem_check_wedge(&dev_priv->gpu_error, true);
- if (ret)
- return ret;
-
- reset_counter = i915_reset_counter(&dev_priv->gpu_error);
-
if (readonly) {
struct drm_i915_gem_request *req;

@@ -1576,9 +1561,9 @@ i915_gem_object_wait_rendering__nonblocking(struct drm_i915_gem_object *obj,
}

mutex_unlock(&dev->struct_mutex);
+ ret = 0;
for (i = 0; ret == 0 && i < n; i++)
- ret = __i915_wait_request(requests[i], reset_counter, true,
- NULL, rps);
+ ret = __i915_wait_request(requests[i], true, NULL, rps);
mutex_lock(&dev->struct_mutex);

for (i = 0; i < n; i++) {
@@ -2692,6 +2677,7 @@ int i915_gem_request_alloc(struct intel_engine_cs *ring,
struct drm_i915_gem_request **req_out)
{
struct drm_i915_private *dev_priv = to_i915(ring->dev);
+ unsigned reset_counter = i915_reset_counter(&dev_priv->gpu_error);
struct drm_i915_gem_request *req;
int ret;

@@ -2700,6 +2686,11 @@ int i915_gem_request_alloc(struct intel_engine_cs *ring,

*req_out = NULL;

+ ret = i915_gem_check_wedge(&dev_priv->gpu_error,
+ dev_priv->mm.interruptible);
+ if (ret)
+ return ret;
+
req = kmem_cache_zalloc(dev_priv->requests, GFP_KERNEL);
if (req == NULL)
return -ENOMEM;
@@ -2711,6 +2702,7 @@ int i915_gem_request_alloc(struct intel_engine_cs *ring,
kref_init(&req->ref);
req->i915 = dev_priv;
req->ring = ring;
+ req->reset_counter = reset_counter;
req->ctx = ctx;
i915_gem_context_reference(req->ctx);

@@ -3068,11 +3060,9 @@ retire:
int
i915_gem_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
{
- struct drm_i915_private *dev_priv = dev->dev_private;
struct drm_i915_gem_wait *args = data;
struct drm_i915_gem_object *obj;
struct drm_i915_gem_request *req[I915_NUM_RINGS];
- unsigned reset_counter;
int i, n = 0;
int ret;

@@ -3106,7 +3096,6 @@ i915_gem_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
}

drm_gem_object_unreference(&obj->base);
- reset_counter = i915_reset_counter(&dev_priv->gpu_error);

for (i = 0; i < I915_NUM_RINGS; i++) {
if (obj->last_read_req[i] == NULL)
@@ -3119,7 +3108,7 @@ i915_gem_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file)

for (i = 0; i < n; i++) {
if (ret == 0)
- ret = __i915_wait_request(req[i], reset_counter, true,
+ ret = __i915_wait_request(req[i], true,
args->timeout_ns > 0 ? &args->timeout_ns : NULL,
to_rps_client(file));
i915_gem_request_unreference__unlocked(req[i]);
@@ -3151,7 +3140,6 @@ __i915_gem_object_sync(struct drm_i915_gem_object *obj,
if (!i915_semaphore_is_enabled(obj->base.dev)) {
struct drm_i915_private *i915 = to_i915(obj->base.dev);
ret = __i915_wait_request(from_req,
- i915_reset_counter(&i915->gpu_error),
i915->mm.interruptible,
NULL,
&i915->rps.semaphores);
@@ -4094,7 +4082,6 @@ i915_gem_ring_throttle(struct drm_device *dev, struct drm_file *file)
struct drm_i915_file_private *file_priv = file->driver_priv;
unsigned long recent_enough = jiffies - DRM_I915_THROTTLE_JIFFIES;
struct drm_i915_gem_request *request, *target = NULL;
- unsigned reset_counter;
int ret;

ret = i915_gem_wait_for_error(&dev_priv->gpu_error);
@@ -4119,7 +4106,6 @@ i915_gem_ring_throttle(struct drm_device *dev, struct drm_file *file)

target = request;
}
- reset_counter = i915_reset_counter(&dev_priv->gpu_error);
if (target)
i915_gem_request_reference(target);
spin_unlock(&file_priv->mm.lock);
@@ -4127,7 +4113,7 @@ i915_gem_ring_throttle(struct drm_device *dev, struct drm_file *file)
if (target == NULL)
return 0;

- ret = __i915_wait_request(target, reset_counter, true, NULL, NULL);
+ ret = __i915_wait_request(target, true, NULL, NULL);
if (ret == 0)
queue_delayed_work(dev_priv->wq, &dev_priv->mm.retire_work, 0);

diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 183c05bdb220..4f36313f31ac 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -11458,7 +11458,6 @@ static void intel_mmio_flip_work_func(struct work_struct *work)

if (mmio_flip->req) {
WARN_ON(__i915_wait_request(mmio_flip->req,
- mmio_flip->crtc->reset_counter,
false, NULL,
&mmio_flip->i915->rps.mmioflips));
i915_gem_request_unreference__unlocked(mmio_flip->req);
@@ -13506,9 +13505,6 @@ static int intel_atomic_prepare_commit(struct drm_device *dev,

ret = drm_atomic_helper_prepare_planes(dev, state);
if (!ret && !async && !i915_reset_in_progress_or_wedged(&dev_priv->gpu_error)) {
- u32 reset_counter;
-
- reset_counter = i915_reset_counter(&dev_priv->gpu_error);
mutex_unlock(&dev->struct_mutex);

for_each_plane_in_state(state, plane, plane_state, i) {
@@ -13519,8 +13515,7 @@ static int intel_atomic_prepare_commit(struct drm_device *dev,
continue;

ret = __i915_wait_request(intel_plane_state->wait_req,
- reset_counter, true,
- NULL, NULL);
+ true, NULL, NULL);

/* Swallow -EIO errors to allow updates during hw lockup. */
if (ret == -EIO)
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 254ce14d790b..3b436eb86ac7 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -848,16 +848,9 @@ static int logical_ring_prepare(struct drm_i915_gem_request *req, int bytes)
*/
int intel_logical_ring_begin(struct drm_i915_gem_request *req, int num_dwords)
{
- struct drm_i915_private *dev_priv;
int ret;

WARN_ON(req == NULL);
- dev_priv = req->ring->dev->dev_private;
-
- ret = i915_gem_check_wedge(&dev_priv->gpu_error,
- dev_priv->mm.interruptible);
- if (ret)
- return ret;

ret = logical_ring_prepare(req, num_dwords * sizeof(uint32_t));
if (ret)
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index 8c6b15ab652b..15121f3fd4f7 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -2274,7 +2274,6 @@ int intel_ring_idle(struct intel_engine_cs *ring)

/* Make sure we do not trigger any retires */
return __i915_wait_request(req,
- i915_reset_counter(&to_i915(ring->dev)->gpu_error),
to_i915(ring->dev)->mm.interruptible,
NULL, NULL);
}
@@ -2405,11 +2404,6 @@ int intel_ring_begin(struct drm_i915_gem_request *req,
ring = req->ring;
dev_priv = ring->dev->dev_private;

- ret = i915_gem_check_wedge(&dev_priv->gpu_error,
- dev_priv->mm.interruptible);
- if (ret)
- return ret;
-
ret = __intel_ring_prepare(ring, num_dwords * sizeof(uint32_t));
if (ret)
return ret;
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:16:31 UTC
Now that we have split out the seqno-barrier from the
engine->get_seqno() callback itself, we can move the users of the
seqno-barrier to the required callsites, simplifying the common code and
making the required workaround handling much more explicit.
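
In short, the boolean passed at every call site is replaced by an
explicit barrier at the few sites that need one (a sketch of the
before/after shape):

        /* before: each caller chose between lazy and coherent reads */
        if (i915_gem_request_completed(req, false /* !lazy_coherency */))
                return true;

        /* after: callers needing coherency apply the barrier themselves */
        if (engine->irq_seqno_barrier)
                engine->irq_seqno_barrier(engine);
        if (i915_gem_request_completed(req))
                return true;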

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_debugfs.c | 4 ++--
drivers/gpu/drm/i915/i915_drv.h | 17 ++++++++---------
drivers/gpu/drm/i915/i915_gem.c | 24 ++++++++++++++++--------
drivers/gpu/drm/i915/intel_display.c | 2 +-
drivers/gpu/drm/i915/intel_pm.c | 4 ++--
5 files changed, 29 insertions(+), 22 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 1499e2337e5d..d09e48455dcb 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -601,7 +601,7 @@ static int i915_gem_pageflip_info(struct seq_file *m, void *data)
i915_gem_request_get_seqno(work->flip_queued_req),
dev_priv->next_seqno,
ring->get_seqno(ring),
- i915_gem_request_completed(work->flip_queued_req, true));
+ i915_gem_request_completed(work->flip_queued_req));
} else
seq_printf(m, "Flip not associated with any ring\n");
seq_printf(m, "Flip queued on frame %d, (was ready on frame %d), now %d\n",
@@ -1354,8 +1354,8 @@ static int i915_hangcheck_info(struct seq_file *m, void *unused)
intel_runtime_pm_get(dev_priv);

for_each_ring(ring, dev_priv, i) {
- seqno[i] = ring->get_seqno(ring);
acthd[i] = intel_ring_get_active_head(ring);
+ seqno[i] = ring->get_seqno(ring);
}

i915_get_extra_instdone(dev, instdone);
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 9762aa76bb0a..44d46018ee13 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2969,20 +2969,14 @@ i915_seqno_passed(uint32_t seq1, uint32_t seq2)
return (int32_t)(seq1 - seq2) >= 0;
}

-static inline bool i915_gem_request_started(struct drm_i915_gem_request *req,
- bool lazy_coherency)
+static inline bool i915_gem_request_started(struct drm_i915_gem_request *req)
{
- if (!lazy_coherency && req->ring->irq_seqno_barrier)
- req->ring->irq_seqno_barrier(req->ring);
return i915_seqno_passed(req->ring->get_seqno(req->ring),
req->previous_seqno);
}

-static inline bool i915_gem_request_completed(struct drm_i915_gem_request *req,
- bool lazy_coherency)
+static inline bool i915_gem_request_completed(struct drm_i915_gem_request *req)
{
- if (!lazy_coherency && req->ring->irq_seqno_barrier)
- req->ring->irq_seqno_barrier(req->ring);
return i915_seqno_passed(req->ring->get_seqno(req->ring),
req->seqno);
}
@@ -3636,6 +3630,8 @@ static inline void i915_trace_irq_get(struct intel_engine_cs *ring,

static inline bool __i915_request_irq_complete(struct drm_i915_gem_request *req)
{
+ struct intel_engine_cs *engine = req->ring;
+
/* Ensure our read of the seqno is coherent so that we
* do not "miss an interrupt" (i.e. if this is the last
* request and the seqno write from the GPU is not visible
@@ -3647,7 +3643,10 @@ static inline bool __i915_request_irq_complete(struct drm_i915_gem_request *req)
* but it is easier and safer to do it every time the waiter
* is woken.
*/
- if (i915_gem_request_completed(req, false))
+ if (engine->irq_seqno_barrier)
+ engine->irq_seqno_barrier(engine);
+
+ if (i915_gem_request_completed(req))
return true;

/* We need to check whether any gpu reset happened in between
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 4b26529f1f44..d125820c6309 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1171,12 +1171,12 @@ static bool __i915_spin_request(struct drm_i915_gem_request *req,
*/

/* Only spin if we know the GPU is processing this request */
- if (!i915_gem_request_started(req, true))
+ if (!i915_gem_request_started(req))
return false;

timeout = local_clock_us(&cpu) + 5;
do {
- if (i915_gem_request_completed(req, true))
+ if (i915_gem_request_completed(req))
return true;

if (signal_pending_state(state, wait->task))
@@ -1228,7 +1228,7 @@ int __i915_wait_request(struct drm_i915_gem_request *req,
if (list_empty(&req->list))
return 0;

- if (i915_gem_request_completed(req, true))
+ if (i915_gem_request_completed(req))
return 0;

timeout_remain = MAX_SCHEDULE_TIMEOUT;
@@ -2724,8 +2724,16 @@ i915_gem_find_active_request(struct intel_engine_cs *ring)
{
struct drm_i915_gem_request *request;

+ /* We are called by the error capture and reset at a random
+ * point in time. In particular, note that neither is crucially
+ * ordered with an interrupt. After a hang, the GPU is dead and we
+ * assume that no more writes can happen (we waited long enough for
+ * all writes that were in transaction to be flushed) - adding an
+ * extra delay for a recent interrupt is pointless. Hence, we do
+ * not need an engine->irq_seqno_barrier() before the seqno reads.
+ */
list_for_each_entry(request, &ring->request_list, list) {
- if (i915_gem_request_completed(request, false))
+ if (i915_gem_request_completed(request))
continue;

return request;
@@ -2859,7 +2867,7 @@ i915_gem_retire_requests_ring(struct intel_engine_cs *ring)
struct drm_i915_gem_request,
list);

- if (!i915_gem_request_completed(request, true))
+ if (!i915_gem_request_completed(request))
break;

i915_gem_request_retire(request);
@@ -2883,7 +2891,7 @@ i915_gem_retire_requests_ring(struct intel_engine_cs *ring)
}

if (unlikely(ring->trace_irq_req &&
- i915_gem_request_completed(ring->trace_irq_req, true))) {
+ i915_gem_request_completed(ring->trace_irq_req))) {
ring->irq_put(ring);
i915_gem_request_assign(&ring->trace_irq_req, NULL);
}
@@ -2995,7 +3003,7 @@ i915_gem_object_flush_active(struct drm_i915_gem_object *obj)
if (list_empty(&req->list))
goto retire;

- if (i915_gem_request_completed(req, true)) {
+ if (i915_gem_request_completed(req)) {
__i915_gem_request_retire__upto(req);
retire:
i915_gem_object_retire__read(obj, i);
@@ -3104,7 +3112,7 @@ __i915_gem_object_sync(struct drm_i915_gem_object *obj,
if (to == from)
return 0;

- if (i915_gem_request_completed(from_req, true))
+ if (i915_gem_request_completed(from_req))
return 0;

if (!i915_semaphore_is_enabled(obj->base.dev)) {
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 7e36f85d3109..de4d4a0d923a 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -11523,7 +11523,7 @@ static bool __intel_pageflip_stall_check(struct drm_device *dev,

if (work->flip_ready_vblank == 0) {
if (work->flip_queued_req &&
- !i915_gem_request_completed(work->flip_queued_req, true))
+ !i915_gem_request_completed(work->flip_queued_req))
return false;

work->flip_ready_vblank = drm_crtc_vblank_count(crtc);
diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index 9df9e9a22f3c..401c3770057d 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -7286,7 +7286,7 @@ static void __intel_rps_boost_work(struct work_struct *work)
struct request_boost *boost = container_of(work, struct request_boost, work);
struct drm_i915_gem_request *req = boost->req;

- if (!i915_gem_request_completed(req, true))
+ if (!i915_gem_request_completed(req))
gen6_rps_boost(to_i915(req->ring->dev), NULL,
req->emitted_jiffies);

@@ -7302,7 +7302,7 @@ void intel_queue_rps_boost_for_request(struct drm_device *dev,
if (req == NULL || INTEL_INFO(dev)->gen < 6)
return;

- if (i915_gem_request_completed(req, true))
+ if (i915_gem_request_completed(req))
return;

boost = kmalloc(sizeof(*boost), GFP_ATOMIC);
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:16:37 UTC
We have testcases to ensure that seqno wraparound works fine, so we can
forgo forcing everyone to encounter seqno wraparound during early
uptime. seqno wraparound incurs a full GPU stall, so not forcing it
eliminates one source of jitter from early in the system's life. The
testcases give us very deterministic coverage, and given how difficult
it would be to debug an issue (GPU hang) stemming from a wraparound
using pure postmortem analysis, I see no value in forcing a wrap during
boot.

Advancing the global next_seqno after a GPU reset is equally pointless.

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_gem.c | 16 +---------------
1 file changed, 1 insertion(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index d125820c6309..a0744626a110 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -4814,14 +4814,6 @@ i915_gem_init_hw(struct drm_device *dev)
}
}

- /*
- * Increment the next seqno by 0x100 so we have a visible break
- * on re-initialisation
- */
- ret = i915_gem_set_seqno(dev, dev_priv->next_seqno+0x100);
- if (ret)
- goto out;
-
/* Now it is safe to go back round and do everything else: */
for_each_ring(ring, dev_priv, i) {
struct drm_i915_gem_request *req;
@@ -5001,13 +4993,7 @@ i915_gem_load(struct drm_device *dev)
dev_priv->num_fence_regs =
I915_READ(vgtif_reg(avail_rs.fence_num));

- /*
- * Set initial sequence number for requests.
- * Using this number allows the wraparound to happen early,
- * catching any obvious problems.
- */
- dev_priv->next_seqno = ((u32)~0 - 0x1100);
- dev_priv->last_seqno = ((u32)~0 - 0x1101);
+ dev_priv->next_seqno = 1;

/* Initialize fence registers to zero */
INIT_LIST_HEAD(&dev_priv->mm.fence_list);
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:16:43 UTC
Since the tests can and do explicitly check debugfs/i915_ring_missed_irqs
for the handling of a "missed interrupt", adding it to the dmesg at INFO
is just noise. When it happens for real, we still class it as an ERROR.

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_irq.c | 3 ---
1 file changed, 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index b3942dec7de4..502663f13cd8 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -3083,9 +3083,6 @@ static void i915_hangcheck_elapsed(struct work_struct *work)
if (!test_bit(ring->id, &dev_priv->gpu_error.test_irq_rings))
DRM_ERROR("Hangcheck timer elapsed... %s idle\n",
ring->name);
- else
- DRM_INFO("Fake missed irq on %s\n",
- ring->name);

intel_engine_enable_fake_irq(ring);
}
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:16:42 UTC
Only declare a missed interrupt if we find that the GPU is idle with
waiters and a hangcheck interval has passed in which no new user
interrupts have been raised.
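
The detection condition from the hangcheck hunk below, pulled out as a
single predicate (user_interrupts is sampled once per hangcheck pass):

        /* Declare a missed interrupt only when all three hold: the
         * engine is idle, somebody is waiting upon it, and no user
         * interrupt arrived since the previous hangcheck sample.
         */
        bool missed_irq = ring_idle(ring, seqno) &&
                          intel_engine_has_waiter(ring) &&
                          ring->hangcheck.user_interrupts == user_interrupts;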

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_debugfs.c | 6 ++++++
drivers/gpu/drm/i915/i915_irq.c | 10 ++++++++--
drivers/gpu/drm/i915/intel_ringbuffer.h | 2 ++
3 files changed, 16 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 5a706c700684..567f8db4c70a 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -735,6 +735,9 @@ static void i915_ring_seqno_info(struct seq_file *m,
seq_printf(m, "Current sequence (%s): %x\n",
ring->name, intel_ring_get_seqno(ring));

+ seq_printf(m, "Current user interrupts (%s): %x\n",
+ ring->name, READ_ONCE(ring->user_interrupts));
+
spin_lock(&ring->breadcrumbs.lock);
for (rb = rb_first(&ring->breadcrumbs.waiters);
rb != NULL;
@@ -1372,6 +1375,9 @@ static int i915_hangcheck_info(struct seq_file *m, void *unused)
seq_printf(m, "\tseqno = %x [current %x], waiters? %d\n",
ring->hangcheck.seqno, seqno[i],
intel_engine_has_waiter(ring));
+ seq_printf(m, "\tuser interrupts = %x [current %x]\n",
+ ring->hangcheck.user_interrupts,
+ ring->user_interrupts);
seq_printf(m, "\tACTHD = 0x%08llx [current 0x%08llx]\n",
(long long)ring->hangcheck.acthd,
(long long)acthd[i]);
diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index bf48fa63127a..b3942dec7de4 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -997,8 +997,10 @@ static void ironlake_rps_change_irq_handler(struct drm_device *dev)
static void notify_ring(struct intel_engine_cs *ring)
{
ring->irq_posted = true; /* paired with mb() in wake_up_process() */
- if (intel_engine_wakeup(ring))
+ if (intel_engine_wakeup(ring)) {
trace_i915_gem_request_notify(ring);
+ ring->user_interrupts++;
+ }
}

static void vlv_c0_read(struct drm_i915_private *dev_priv,
@@ -3061,12 +3063,14 @@ static void i915_hangcheck_elapsed(struct work_struct *work)
for_each_ring(ring, dev_priv, i) {
u64 acthd;
u32 seqno;
+ unsigned user_interrupts;
bool busy = true;

semaphore_clear_deadlocks(dev_priv);

acthd = intel_ring_get_active_head(ring);
seqno = intel_ring_get_seqno(ring);
+ user_interrupts = READ_ONCE(ring->user_interrupts);

if (ring->hangcheck.seqno == seqno) {
if (ring_idle(ring, seqno)) {
@@ -3074,7 +3078,8 @@ static void i915_hangcheck_elapsed(struct work_struct *work)

if (intel_engine_has_waiter(ring)) {
/* Issue a wake-up to catch stuck h/w. */
- if (!test_and_set_bit(ring->id, &dev_priv->gpu_error.missed_irq_rings)) {
+ if (ring->hangcheck.user_interrupts == user_interrupts &&
+ !test_and_set_bit(ring->id, &dev_priv->gpu_error.missed_irq_rings)) {
if (!test_bit(ring->id, &dev_priv->gpu_error.test_irq_rings))
DRM_ERROR("Hangcheck timer elapsed... %s idle\n",
ring->name);
@@ -3142,6 +3147,7 @@ static void i915_hangcheck_elapsed(struct work_struct *work)

ring->hangcheck.seqno = seqno;
ring->hangcheck.acthd = acthd;
+ ring->hangcheck.user_interrupts = user_interrupts;
busy_count += busy;
}

diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 3364bcebd456..73da75fa47c1 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -90,6 +90,7 @@ struct intel_ring_hangcheck {
u64 acthd;
u64 max_acthd;
u32 seqno;
+ unsigned user_interrupts;
int score;
enum intel_ring_hangcheck_action action;
int deadlock;
@@ -328,6 +329,7 @@ struct intel_engine_cs {
* inspecting request list.
*/
u32 last_submitted_seqno;
+ unsigned user_interrupts;

bool gpu_caches_dirty;
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:16:34 UTC
Permalink
If we flag the seqno as potentially stale upon receiving an interrupt,
we can use that information to reduce the frequency with which we apply the
heavyweight coherent seqno read (i.e. if we wake up a chain of waiters).
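
A minimal sketch of the ordering this relies on (illustrative, using C11
atomics in place of the kernel's READ_ONCE/WRITE_ONCE; the seqno
comparison ignores wraparound for brevity):

    #include <stdatomic.h>
    #include <stdbool.h>

    static atomic_bool irq_posted;

    /* Interrupt handler: mark the cached seqno as potentially stale,
     * then wake any waiter.
     */
    static void on_user_interrupt(void)
    {
            atomic_store(&irq_posted, true);
    }

    /* Waiter: only pay for the heavyweight coherent read when an
     * interrupt has actually arrived. Clearing irq_posted BEFORE the
     * barrier means a racing interrupt re-asserts it and forces
     * another pass, so a seqno update is never missed.
     */
    static bool seqno_passed(unsigned int (*coherent_read)(void),
                             unsigned int wanted)
    {
            if (!atomic_load(&irq_posted))
                    return false;
            atomic_store(&irq_posted, false);
            return coherent_read() >= wanted;
    }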

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_drv.h | 15 ++++++++++++++-
drivers/gpu/drm/i915/i915_irq.c | 1 +
drivers/gpu/drm/i915/intel_breadcrumbs.c | 8 ++++++++
drivers/gpu/drm/i915/intel_ringbuffer.h | 1 +
4 files changed, 24 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index c2ee8efdd928..8940b8d3fa59 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -3649,7 +3649,20 @@ static inline bool __i915_request_irq_complete(struct drm_i915_gem_request *req)
* but it is easier and safer to do it every time the waiter
* is woken.
*/
- if (engine->irq_seqno_barrier) {
+ if (engine->irq_seqno_barrier && READ_ONCE(engine->irq_posted)) {
+ /* The ordering of irq_posted versus applying the barrier
+ * is crucial. The clearing of the current irq_posted must
+ * be visible before we perform the barrier operation,
+ * such that if a subsequent interrupt arrives, irq_posted
+ * is reasserted and our task rewoken (which causes us to
+ * do another __i915_request_irq_complete() immediately
+ * and reapply the barrier). Conversely, if the clear
+ * occurs after the barrier, then an interrupt that arrived
+ * whilst we waited on the barrier would not trigger a
+ * barrier on the next pass, and the read may not see the
+ * seqno update.
+ */
+ WRITE_ONCE(engine->irq_posted, false);
engine->irq_seqno_barrier(engine);
if (i915_gem_request_completed(req))
return true;
diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index 627c7fb6aa9b..738edd7fbf8d 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -1000,6 +1000,7 @@ static void notify_ring(struct intel_engine_cs *ring)
return;

trace_i915_gem_request_notify(ring);
+ ring->irq_posted = true; /* paired with mb() in wake_up_process() */
intel_engine_wakeup(ring);
}

diff --git a/drivers/gpu/drm/i915/intel_breadcrumbs.c b/drivers/gpu/drm/i915/intel_breadcrumbs.c
index f66acf820c40..d689bd61534e 100644
--- a/drivers/gpu/drm/i915/intel_breadcrumbs.c
+++ b/drivers/gpu/drm/i915/intel_breadcrumbs.c
@@ -43,12 +43,20 @@ static void intel_breadcrumbs_fake_irq(unsigned long data)

static void irq_enable(struct intel_engine_cs *engine)
{
+ /* Enabling the IRQ may miss the generation of the interrupt, but
+ * we still need to force the barrier before reading the seqno,
+ * just in case.
+ */
+ engine->irq_posted = true;
+
WARN_ON(!engine->irq_get(engine));
}

static void irq_disable(struct intel_engine_cs *engine)
{
engine->irq_put(engine);
+
+ engine->irq_posted = false;
}

static bool __intel_breadcrumbs_enable_irq(struct intel_breadcrumbs *b)
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 28ab07b38c05..6cc8e9c5f8d6 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -198,6 +198,7 @@ struct intel_engine_cs {
struct i915_ctx_workarounds wa_ctx;

unsigned irq_refcount; /* protected by dev_priv->irq_lock */
+ bool irq_posted;
u32 irq_enable_mask; /* bitmask to enable ring interrupt */
struct drm_i915_gem_request *trace_irq_req;
bool __must_check (*irq_get)(struct intel_engine_cs *ring);
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:16:41 UTC
Permalink
With only a single callsite for intel_engine_cs->irq_get and ->irq_put,
we can reduce the code size by moving the common preamble into the
caller, and we can also eliminate the reference counting.

For completeness, as we are no longer doing reference counting on irq,
rename the get/put vfunctions to enable/disable respectively.
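
A minimal sketch of the resulting shape (illustrative; a pthread mutex
stands in for the kernel's irq_lock spinlock, and the vfuncs shrink to
the bare register writes):

    #include <pthread.h>

    struct engine {
            void (*irq_enable)(struct engine *e);   /* bare IMR write */
            void (*irq_disable)(struct engine *e);
    };

    static pthread_mutex_t irq_lock = PTHREAD_MUTEX_INITIALIZER;

    /* The single caller now owns the locking; no per-engine refcount. */
    static void engine_irq_enable(struct engine *e)
    {
            pthread_mutex_lock(&irq_lock);
            e->irq_enable(e);
            pthread_mutex_unlock(&irq_lock);
    }

    static void engine_irq_disable(struct engine *e)
    {
            pthread_mutex_lock(&irq_lock);
            e->irq_disable(e);
            pthread_mutex_unlock(&irq_lock);
    }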

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/intel_breadcrumbs.c | 8 +-
drivers/gpu/drm/i915/intel_lrc.c | 53 ++----
drivers/gpu/drm/i915/intel_ringbuffer.c | 302 ++++++++++---------------------
drivers/gpu/drm/i915/intel_ringbuffer.h | 5 +-
4 files changed, 125 insertions(+), 243 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_breadcrumbs.c b/drivers/gpu/drm/i915/intel_breadcrumbs.c
index cf9cbcc2d5d7..0ea01bd6811c 100644
--- a/drivers/gpu/drm/i915/intel_breadcrumbs.c
+++ b/drivers/gpu/drm/i915/intel_breadcrumbs.c
@@ -51,12 +51,16 @@ static void irq_enable(struct intel_engine_cs *engine)
*/
engine->irq_posted = true;

- WARN_ON(!engine->irq_get(engine));
+ spin_lock_irq(&engine->i915->irq_lock);
+ engine->irq_enable(engine);
+ spin_unlock_irq(&engine->i915->irq_lock);
}

static void irq_disable(struct intel_engine_cs *engine)
{
- engine->irq_put(engine);
+ spin_lock_irq(&engine->i915->irq_lock);
+ engine->irq_disable(engine);
+ spin_unlock_irq(&engine->i915->irq_lock);

engine->irq_posted = false;
}
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 27d91f1ceb2b..b1ede2e9b372 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1640,37 +1640,20 @@ static int gen8_emit_bb_start(struct drm_i915_gem_request *req,
return 0;
}

-static bool gen8_logical_ring_get_irq(struct intel_engine_cs *ring)
+static void gen8_logical_ring_enable_irq(struct intel_engine_cs *ring)
{
- struct drm_device *dev = ring->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
- unsigned long flags;
-
- if (WARN_ON(!intel_irqs_enabled(dev_priv)))
- return false;
-
- spin_lock_irqsave(&dev_priv->irq_lock, flags);
- if (ring->irq_refcount++ == 0) {
- I915_WRITE_IMR(ring, ~(ring->irq_enable_mask | ring->irq_keep_mask));
- POSTING_READ(RING_IMR(ring->mmio_base));
- }
- spin_unlock_irqrestore(&dev_priv->irq_lock, flags);
+ struct drm_i915_private *dev_priv = ring->i915;

- return true;
+ I915_WRITE_IMR(ring, ~(ring->irq_enable_mask | ring->irq_keep_mask));
+ POSTING_READ(RING_IMR(ring->mmio_base));
}

-static void gen8_logical_ring_put_irq(struct intel_engine_cs *ring)
+static void gen8_logical_ring_disable_irq(struct intel_engine_cs *ring)
{
- struct drm_device *dev = ring->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
- unsigned long flags;
+ struct drm_i915_private *dev_priv = ring->i915;

- spin_lock_irqsave(&dev_priv->irq_lock, flags);
- if (--ring->irq_refcount == 0) {
- I915_WRITE_IMR(ring, ~ring->irq_keep_mask);
- POSTING_READ(RING_IMR(ring->mmio_base));
- }
- spin_unlock_irqrestore(&dev_priv->irq_lock, flags);
+ I915_WRITE_IMR(ring, ~ring->irq_keep_mask);
+ POSTING_READ(RING_IMR(ring->mmio_base));
}

static int gen8_emit_flush(struct drm_i915_gem_request *request,
@@ -1993,8 +1976,8 @@ static int logical_render_ring_init(struct drm_device *dev)
ring->irq_seqno_barrier = gen6_seqno_barrier;
ring->emit_request = gen8_emit_request;
ring->emit_flush = gen8_emit_flush_render;
- ring->irq_get = gen8_logical_ring_get_irq;
- ring->irq_put = gen8_logical_ring_put_irq;
+ ring->irq_enable = gen8_logical_ring_enable_irq;
+ ring->irq_disable = gen8_logical_ring_disable_irq;
ring->emit_bb_start = gen8_emit_bb_start;

ring->dev = dev;
@@ -2039,8 +2022,8 @@ static int logical_bsd_ring_init(struct drm_device *dev)
ring->irq_seqno_barrier = gen6_seqno_barrier;
ring->emit_request = gen8_emit_request;
ring->emit_flush = gen8_emit_flush;
- ring->irq_get = gen8_logical_ring_get_irq;
- ring->irq_put = gen8_logical_ring_put_irq;
+ ring->irq_enable = gen8_logical_ring_enable_irq;
+ ring->irq_disable = gen8_logical_ring_disable_irq;
ring->emit_bb_start = gen8_emit_bb_start;

return logical_ring_init(dev, ring);
@@ -2063,8 +2046,8 @@ static int logical_bsd2_ring_init(struct drm_device *dev)
ring->irq_seqno_barrier = gen6_seqno_barrier;
ring->emit_request = gen8_emit_request;
ring->emit_flush = gen8_emit_flush;
- ring->irq_get = gen8_logical_ring_get_irq;
- ring->irq_put = gen8_logical_ring_put_irq;
+ ring->irq_enable = gen8_logical_ring_enable_irq;
+ ring->irq_disable = gen8_logical_ring_disable_irq;
ring->emit_bb_start = gen8_emit_bb_start;

return logical_ring_init(dev, ring);
@@ -2087,8 +2070,8 @@ static int logical_blt_ring_init(struct drm_device *dev)
ring->irq_seqno_barrier = gen6_seqno_barrier;
ring->emit_request = gen8_emit_request;
ring->emit_flush = gen8_emit_flush;
- ring->irq_get = gen8_logical_ring_get_irq;
- ring->irq_put = gen8_logical_ring_put_irq;
+ ring->irq_enable = gen8_logical_ring_enable_irq;
+ ring->irq_disable = gen8_logical_ring_disable_irq;
ring->emit_bb_start = gen8_emit_bb_start;

return logical_ring_init(dev, ring);
@@ -2111,8 +2094,8 @@ static int logical_vebox_ring_init(struct drm_device *dev)
ring->irq_seqno_barrier = gen6_seqno_barrier;
ring->emit_request = gen8_emit_request;
ring->emit_flush = gen8_emit_flush;
- ring->irq_get = gen8_logical_ring_get_irq;
- ring->irq_put = gen8_logical_ring_put_irq;
+ ring->irq_enable = gen8_logical_ring_enable_irq;
+ ring->irq_disable = gen8_logical_ring_disable_irq;
ring->emit_bb_start = gen8_emit_bb_start;

return logical_ring_init(dev, ring);
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index c86d0e17d785..5625f56a2db1 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -1503,109 +1503,56 @@ gen6_seqno_barrier(struct intel_engine_cs *ring)
intel_flush_status_page(ring, I915_GEM_HWS_INDEX);
}

-static bool
-gen5_ring_get_irq(struct intel_engine_cs *ring)
+static void
+gen5_ring_enable_irq(struct intel_engine_cs *ring)
{
- struct drm_device *dev = ring->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
- unsigned long flags;
-
- if (WARN_ON(!intel_irqs_enabled(dev_priv)))
- return false;
-
- spin_lock_irqsave(&dev_priv->irq_lock, flags);
- if (ring->irq_refcount++ == 0)
- gen5_enable_gt_irq(dev_priv, ring->irq_enable_mask);
- spin_unlock_irqrestore(&dev_priv->irq_lock, flags);
-
- return true;
+ gen5_enable_gt_irq(ring->i915, ring->irq_enable_mask);
}

static void
-gen5_ring_put_irq(struct intel_engine_cs *ring)
+gen5_ring_disable_irq(struct intel_engine_cs *ring)
{
- struct drm_device *dev = ring->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
- unsigned long flags;
-
- spin_lock_irqsave(&dev_priv->irq_lock, flags);
- if (--ring->irq_refcount == 0)
- gen5_disable_gt_irq(dev_priv, ring->irq_enable_mask);
- spin_unlock_irqrestore(&dev_priv->irq_lock, flags);
+ gen5_disable_gt_irq(ring->i915, ring->irq_enable_mask);
}

-static bool
-i9xx_ring_get_irq(struct intel_engine_cs *ring)
+static void
+i9xx_ring_enable_irq(struct intel_engine_cs *ring)
{
- struct drm_device *dev = ring->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
- unsigned long flags;
-
- if (!intel_irqs_enabled(dev_priv))
- return false;
-
- spin_lock_irqsave(&dev_priv->irq_lock, flags);
- if (ring->irq_refcount++ == 0) {
- dev_priv->irq_mask &= ~ring->irq_enable_mask;
- I915_WRITE(IMR, dev_priv->irq_mask);
- POSTING_READ(IMR);
- }
- spin_unlock_irqrestore(&dev_priv->irq_lock, flags);
+ struct drm_i915_private *dev_priv = ring->i915;

- return true;
+ dev_priv->irq_mask &= ~ring->irq_enable_mask;
+ I915_WRITE(IMR, dev_priv->irq_mask);
+ POSTING_READ(IMR);
}

static void
-i9xx_ring_put_irq(struct intel_engine_cs *ring)
+i9xx_ring_disable_irq(struct intel_engine_cs *ring)
{
- struct drm_device *dev = ring->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
- unsigned long flags;
+ struct drm_i915_private *dev_priv = ring->i915;

- spin_lock_irqsave(&dev_priv->irq_lock, flags);
- if (--ring->irq_refcount == 0) {
- dev_priv->irq_mask |= ring->irq_enable_mask;
- I915_WRITE(IMR, dev_priv->irq_mask);
- POSTING_READ(IMR);
- }
- spin_unlock_irqrestore(&dev_priv->irq_lock, flags);
+ dev_priv->irq_mask |= ring->irq_enable_mask;
+ I915_WRITE(IMR, dev_priv->irq_mask);
+ POSTING_READ(IMR);
}

-static bool
-i8xx_ring_get_irq(struct intel_engine_cs *ring)
+static void
+i8xx_ring_enable_irq(struct intel_engine_cs *ring)
{
- struct drm_device *dev = ring->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
- unsigned long flags;
-
- if (!intel_irqs_enabled(dev_priv))
- return false;
-
- spin_lock_irqsave(&dev_priv->irq_lock, flags);
- if (ring->irq_refcount++ == 0) {
- dev_priv->irq_mask &= ~ring->irq_enable_mask;
- I915_WRITE16(IMR, dev_priv->irq_mask);
- POSTING_READ16(IMR);
- }
- spin_unlock_irqrestore(&dev_priv->irq_lock, flags);
+ struct drm_i915_private *dev_priv = ring->i915;

- return true;
+ dev_priv->irq_mask &= ~ring->irq_enable_mask;
+ I915_WRITE16(IMR, dev_priv->irq_mask);
+ POSTING_READ16(IMR);
}

static void
-i8xx_ring_put_irq(struct intel_engine_cs *ring)
+i8xx_ring_disable_irq(struct intel_engine_cs *ring)
{
- struct drm_device *dev = ring->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
- unsigned long flags;
+ struct drm_i915_private *dev_priv = ring->i915;

- spin_lock_irqsave(&dev_priv->irq_lock, flags);
- if (--ring->irq_refcount == 0) {
- dev_priv->irq_mask |= ring->irq_enable_mask;
- I915_WRITE16(IMR, dev_priv->irq_mask);
- POSTING_READ16(IMR);
- }
- spin_unlock_irqrestore(&dev_priv->irq_lock, flags);
+ dev_priv->irq_mask |= ring->irq_enable_mask;
+ I915_WRITE16(IMR, dev_priv->irq_mask);
+ POSTING_READ16(IMR);
}

static int
@@ -1645,128 +1592,77 @@ i9xx_add_request(struct drm_i915_gem_request *req)
return 0;
}

-static bool
-gen6_ring_get_irq(struct intel_engine_cs *ring)
+static void
+gen6_ring_enable_irq(struct intel_engine_cs *ring)
{
- struct drm_device *dev = ring->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
- unsigned long flags;
-
- if (WARN_ON(!intel_irqs_enabled(dev_priv)))
- return false;
-
- spin_lock_irqsave(&dev_priv->irq_lock, flags);
- if (ring->irq_refcount++ == 0) {
- if (HAS_L3_DPF(dev) && ring->id == RCS)
- I915_WRITE_IMR(ring,
- ~(ring->irq_enable_mask |
- GT_PARITY_ERROR(dev)));
- else
- I915_WRITE_IMR(ring, ~ring->irq_enable_mask);
- gen5_enable_gt_irq(dev_priv, ring->irq_enable_mask);
- }
- spin_unlock_irqrestore(&dev_priv->irq_lock, flags);
+ struct drm_i915_private *dev_priv = ring->i915;

- return true;
+ if (HAS_L3_DPF(dev_priv) && ring->id == RCS)
+ I915_WRITE_IMR(ring,
+ ~(ring->irq_enable_mask |
+ GT_PARITY_ERROR(dev_priv)));
+ else
+ I915_WRITE_IMR(ring, ~ring->irq_enable_mask);
+ gen5_enable_gt_irq(dev_priv, ring->irq_enable_mask);
}

static void
-gen6_ring_put_irq(struct intel_engine_cs *ring)
+gen6_ring_disable_irq(struct intel_engine_cs *ring)
{
- struct drm_device *dev = ring->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
- unsigned long flags;
+ struct drm_i915_private *dev_priv = ring->i915;

- spin_lock_irqsave(&dev_priv->irq_lock, flags);
- if (--ring->irq_refcount == 0) {
- if (HAS_L3_DPF(dev) && ring->id == RCS)
- I915_WRITE_IMR(ring, ~GT_PARITY_ERROR(dev));
- else
- I915_WRITE_IMR(ring, ~0);
- gen5_disable_gt_irq(dev_priv, ring->irq_enable_mask);
- }
- spin_unlock_irqrestore(&dev_priv->irq_lock, flags);
+ if (HAS_L3_DPF(dev_priv) && ring->id == RCS)
+ I915_WRITE_IMR(ring, ~GT_PARITY_ERROR(dev_priv));
+ else
+ I915_WRITE_IMR(ring, ~0);
+ gen5_disable_gt_irq(dev_priv, ring->irq_enable_mask);
}

-static bool
-hsw_vebox_get_irq(struct intel_engine_cs *ring)
+static void
+hsw_vebox_enable_irq(struct intel_engine_cs *ring)
{
- struct drm_device *dev = ring->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
- unsigned long flags;
-
- if (WARN_ON(!intel_irqs_enabled(dev_priv)))
- return false;
-
- spin_lock_irqsave(&dev_priv->irq_lock, flags);
- if (ring->irq_refcount++ == 0) {
- I915_WRITE_IMR(ring, ~ring->irq_enable_mask);
- gen6_enable_pm_irq(dev_priv, ring->irq_enable_mask);
- }
- spin_unlock_irqrestore(&dev_priv->irq_lock, flags);
+ struct drm_i915_private *dev_priv = ring->i915;

- return true;
+ I915_WRITE_IMR(ring, ~ring->irq_enable_mask);
+ gen6_enable_pm_irq(dev_priv, ring->irq_enable_mask);
}

static void
-hsw_vebox_put_irq(struct intel_engine_cs *ring)
+hsw_vebox_disable_irq(struct intel_engine_cs *ring)
{
- struct drm_device *dev = ring->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
- unsigned long flags;
+ struct drm_i915_private *dev_priv = ring->i915;

- spin_lock_irqsave(&dev_priv->irq_lock, flags);
- if (--ring->irq_refcount == 0) {
- I915_WRITE_IMR(ring, ~0);
- gen6_disable_pm_irq(dev_priv, ring->irq_enable_mask);
- }
- spin_unlock_irqrestore(&dev_priv->irq_lock, flags);
+ I915_WRITE_IMR(ring, ~0);
+ gen6_disable_pm_irq(dev_priv, ring->irq_enable_mask);
}

-static bool
-gen8_ring_get_irq(struct intel_engine_cs *ring)
+static void
+gen8_ring_enable_irq(struct intel_engine_cs *ring)
{
- struct drm_device *dev = ring->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
- unsigned long flags;
-
- if (WARN_ON(!intel_irqs_enabled(dev_priv)))
- return false;
+ struct drm_i915_private *dev_priv = ring->i915;

- spin_lock_irqsave(&dev_priv->irq_lock, flags);
- if (ring->irq_refcount++ == 0) {
- if (HAS_L3_DPF(dev) && ring->id == RCS) {
- I915_WRITE_IMR(ring,
- ~(ring->irq_enable_mask |
- GT_RENDER_L3_PARITY_ERROR_INTERRUPT));
- } else {
- I915_WRITE_IMR(ring, ~ring->irq_enable_mask);
- }
- POSTING_READ(RING_IMR(ring->mmio_base));
+ if (HAS_L3_DPF(dev_priv) && ring->id == RCS) {
+ I915_WRITE_IMR(ring,
+ ~(ring->irq_enable_mask |
+ GT_RENDER_L3_PARITY_ERROR_INTERRUPT));
+ } else {
+ I915_WRITE_IMR(ring, ~ring->irq_enable_mask);
}
- spin_unlock_irqrestore(&dev_priv->irq_lock, flags);
-
- return true;
+ POSTING_READ(RING_IMR(ring->mmio_base));
}

static void
-gen8_ring_put_irq(struct intel_engine_cs *ring)
+gen8_ring_disable_irq(struct intel_engine_cs *ring)
{
- struct drm_device *dev = ring->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
- unsigned long flags;
+ struct drm_i915_private *dev_priv = ring->i915;

- spin_lock_irqsave(&dev_priv->irq_lock, flags);
- if (--ring->irq_refcount == 0) {
- if (HAS_L3_DPF(dev) && ring->id == RCS) {
- I915_WRITE_IMR(ring,
- ~GT_RENDER_L3_PARITY_ERROR_INTERRUPT);
- } else {
- I915_WRITE_IMR(ring, ~0);
- }
- POSTING_READ(RING_IMR(ring->mmio_base));
+ if (HAS_L3_DPF(dev_priv) && ring->id == RCS) {
+ I915_WRITE_IMR(ring,
+ ~GT_RENDER_L3_PARITY_ERROR_INTERRUPT);
+ } else {
+ I915_WRITE_IMR(ring, ~0);
}
- spin_unlock_irqrestore(&dev_priv->irq_lock, flags);
+ POSTING_READ(RING_IMR(ring->mmio_base));
}

static int
@@ -2667,8 +2563,8 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
ring->init_context = intel_rcs_ctx_init;
ring->add_request = gen6_add_request;
ring->flush = gen8_render_ring_flush;
- ring->irq_get = gen8_ring_get_irq;
- ring->irq_put = gen8_ring_put_irq;
+ ring->irq_enable = gen8_ring_enable_irq;
+ ring->irq_disable = gen8_ring_disable_irq;
ring->irq_enable_mask = GT_RENDER_USER_INTERRUPT;
ring->irq_seqno_barrier = gen6_seqno_barrier;
if (i915_semaphore_is_enabled(dev)) {
@@ -2683,8 +2579,8 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
ring->flush = gen7_render_ring_flush;
if (INTEL_INFO(dev)->gen == 6)
ring->flush = gen6_render_ring_flush;
- ring->irq_get = gen6_ring_get_irq;
- ring->irq_put = gen6_ring_put_irq;
+ ring->irq_enable = gen6_ring_enable_irq;
+ ring->irq_disable = gen6_ring_disable_irq;
ring->irq_enable_mask = GT_RENDER_USER_INTERRUPT;
ring->irq_seqno_barrier = gen6_seqno_barrier;
if (i915_semaphore_is_enabled(dev)) {
@@ -2711,8 +2607,8 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
} else if (IS_GEN5(dev)) {
ring->add_request = pc_render_add_request;
ring->flush = gen4_render_ring_flush;
- ring->irq_get = gen5_ring_get_irq;
- ring->irq_put = gen5_ring_put_irq;
+ ring->irq_enable = gen5_ring_enable_irq;
+ ring->irq_disable = gen5_ring_disable_irq;
ring->irq_enable_mask = GT_RENDER_USER_INTERRUPT |
GT_RENDER_PIPECTL_NOTIFY_INTERRUPT;
} else {
@@ -2722,11 +2618,11 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
else
ring->flush = gen4_render_ring_flush;
if (IS_GEN2(dev)) {
- ring->irq_get = i8xx_ring_get_irq;
- ring->irq_put = i8xx_ring_put_irq;
+ ring->irq_enable = i8xx_ring_enable_irq;
+ ring->irq_disable = i8xx_ring_disable_irq;
} else {
- ring->irq_get = i9xx_ring_get_irq;
- ring->irq_put = i9xx_ring_put_irq;
+ ring->irq_enable = i9xx_ring_enable_irq;
+ ring->irq_disable = i9xx_ring_disable_irq;
}
ring->irq_enable_mask = I915_USER_INTERRUPT;
}
@@ -2799,8 +2695,8 @@ int intel_init_bsd_ring_buffer(struct drm_device *dev)
if (INTEL_INFO(dev)->gen >= 8) {
ring->irq_enable_mask =
GT_RENDER_USER_INTERRUPT << GEN8_VCS1_IRQ_SHIFT;
- ring->irq_get = gen8_ring_get_irq;
- ring->irq_put = gen8_ring_put_irq;
+ ring->irq_enable = gen8_ring_enable_irq;
+ ring->irq_disable = gen8_ring_disable_irq;
ring->dispatch_execbuffer =
gen8_ring_dispatch_execbuffer;
if (i915_semaphore_is_enabled(dev)) {
@@ -2810,8 +2706,8 @@ int intel_init_bsd_ring_buffer(struct drm_device *dev)
}
} else {
ring->irq_enable_mask = GT_BSD_USER_INTERRUPT;
- ring->irq_get = gen6_ring_get_irq;
- ring->irq_put = gen6_ring_put_irq;
+ ring->irq_enable = gen6_ring_enable_irq;
+ ring->irq_disable = gen6_ring_disable_irq;
ring->dispatch_execbuffer =
gen6_ring_dispatch_execbuffer;
if (i915_semaphore_is_enabled(dev)) {
@@ -2835,12 +2731,12 @@ int intel_init_bsd_ring_buffer(struct drm_device *dev)
ring->add_request = i9xx_add_request;
if (IS_GEN5(dev)) {
ring->irq_enable_mask = ILK_BSD_USER_INTERRUPT;
- ring->irq_get = gen5_ring_get_irq;
- ring->irq_put = gen5_ring_put_irq;
+ ring->irq_enable = gen5_ring_enable_irq;
+ ring->irq_disable = gen5_ring_disable_irq;
} else {
ring->irq_enable_mask = I915_BSD_USER_INTERRUPT;
- ring->irq_get = i9xx_ring_get_irq;
- ring->irq_put = i9xx_ring_put_irq;
+ ring->irq_enable = i9xx_ring_enable_irq;
+ ring->irq_disable = i9xx_ring_disable_irq;
}
ring->dispatch_execbuffer = i965_dispatch_execbuffer;
}
@@ -2867,8 +2763,8 @@ int intel_init_bsd2_ring_buffer(struct drm_device *dev)
ring->irq_seqno_barrier = gen6_seqno_barrier;
ring->irq_enable_mask =
GT_RENDER_USER_INTERRUPT << GEN8_VCS2_IRQ_SHIFT;
- ring->irq_get = gen8_ring_get_irq;
- ring->irq_put = gen8_ring_put_irq;
+ ring->irq_enable = gen8_ring_enable_irq;
+ ring->irq_disable = gen8_ring_disable_irq;
ring->dispatch_execbuffer =
gen8_ring_dispatch_execbuffer;
if (i915_semaphore_is_enabled(dev)) {
@@ -2897,8 +2793,8 @@ int intel_init_blt_ring_buffer(struct drm_device *dev)
if (INTEL_INFO(dev)->gen >= 8) {
ring->irq_enable_mask =
GT_RENDER_USER_INTERRUPT << GEN8_BCS_IRQ_SHIFT;
- ring->irq_get = gen8_ring_get_irq;
- ring->irq_put = gen8_ring_put_irq;
+ ring->irq_enable = gen8_ring_enable_irq;
+ ring->irq_disable = gen8_ring_disable_irq;
ring->dispatch_execbuffer = gen8_ring_dispatch_execbuffer;
if (i915_semaphore_is_enabled(dev)) {
ring->semaphore.sync_to = gen8_ring_sync;
@@ -2907,8 +2803,8 @@ int intel_init_blt_ring_buffer(struct drm_device *dev)
}
} else {
ring->irq_enable_mask = GT_BLT_USER_INTERRUPT;
- ring->irq_get = gen6_ring_get_irq;
- ring->irq_put = gen6_ring_put_irq;
+ ring->irq_enable = gen6_ring_enable_irq;
+ ring->irq_disable = gen6_ring_disable_irq;
ring->dispatch_execbuffer = gen6_ring_dispatch_execbuffer;
if (i915_semaphore_is_enabled(dev)) {
ring->semaphore.signal = gen6_signal;
@@ -2954,8 +2850,8 @@ int intel_init_vebox_ring_buffer(struct drm_device *dev)
if (INTEL_INFO(dev)->gen >= 8) {
ring->irq_enable_mask =
GT_RENDER_USER_INTERRUPT << GEN8_VECS_IRQ_SHIFT;
- ring->irq_get = gen8_ring_get_irq;
- ring->irq_put = gen8_ring_put_irq;
+ ring->irq_enable = gen8_ring_enable_irq;
+ ring->irq_disable = gen8_ring_disable_irq;
ring->dispatch_execbuffer = gen8_ring_dispatch_execbuffer;
if (i915_semaphore_is_enabled(dev)) {
ring->semaphore.sync_to = gen8_ring_sync;
@@ -2964,8 +2860,8 @@ int intel_init_vebox_ring_buffer(struct drm_device *dev)
}
} else {
ring->irq_enable_mask = PM_VEBOX_USER_INTERRUPT;
- ring->irq_get = hsw_vebox_get_irq;
- ring->irq_put = hsw_vebox_put_irq;
+ ring->irq_enable = hsw_vebox_enable_irq;
+ ring->irq_disable = hsw_vebox_disable_irq;
ring->dispatch_execbuffer = gen6_ring_dispatch_execbuffer;
if (i915_semaphore_is_enabled(dev)) {
ring->semaphore.sync_to = gen6_ring_sync;
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index ba81052999fa..3364bcebd456 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -202,11 +202,10 @@ struct intel_engine_cs {
struct intel_hw_status_page status_page;
struct i915_ctx_workarounds wa_ctx;

- unsigned irq_refcount; /* protected by dev_priv->irq_lock */
bool irq_posted;
u32 irq_enable_mask; /* bitmask to enable ring interrupt */
- bool __must_check (*irq_get)(struct intel_engine_cs *ring);
- void (*irq_put)(struct intel_engine_cs *ring);
+ void (*irq_enable)(struct intel_engine_cs *ring);
+ void (*irq_disable)(struct intel_engine_cs *ring);

int (*init_hw)(struct intel_engine_cs *ring);
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:16:23 UTC
Permalink
Reporting -EIO from i915_wait_request() has proven very troublesome
over the years, with numerous hard-to-reproduce bugs cropping up in the
corner case of where a reset occurs and the code wasn't expecting such
an error.

If we reset the GPU, or have detected a hang and wish to reset the
GPU, the request is forcibly completed and the wait broken. Currently, we
report either -EAGAIN or -EIO in order for the caller to retreat and
restart the wait (if appropriate) after dropping and then reacquiring
the struct_mutex (essential to allow the GPU reset to proceed). However,
if we take the view that the request is complete (no further work will
be done on it by the GPU because it is dead and soon to be reset), then
we can proceed with the task at hand and then drop the struct_mutex
allowing the reset to occur. This transfers the burden of checking
whether it is safe to proceed onto the caller; in all but one instance
it is safe, completely eliminating the source of all spurious
-EIO.

Of note, we only have two API entry points where we expect that
userspace can observe an EIO. First is when submitting an execbuf, if
the GPU is terminally wedged, then the operation cannot succeed and an
-EIO is reported. Secondly, existing userspace uses the throttle ioctl
to detect an already wedged GPU before starting using HW acceleration
(or to confirm that the GPU is wedged after an error condition). So if
the GPU is wedged when the user calls throttle, also report -EIO.

v2: Split more carefully the change to i915_wait_request() and assorted
ABI from the reset handling.
v3: Add a couple of WARN_ON(EIO) to the interruptible modesetting code
so that we don't start to leak EIO there in future (and break our hang
resistant modesetting).
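
A minimal sketch of the resulting wait behaviour (illustrative; error
codes only, no real sleeping or locking):

    #include <errno.h>
    #include <stdbool.h>

    /* One iteration of the wait loop: a reset between submission and
     * now means the request is forcibly complete, so return success
     * rather than -EAGAIN/-EIO and let the caller proceed.
     */
    static int wait_step(unsigned int req_reset_counter,
                         unsigned int cur_reset_counter,
                         bool completed)
    {
            if (req_reset_counter != cur_reset_counter)
                    return 0;               /* reset: treat as complete */
            return completed ? 0 : -EBUSY;  /* -EBUSY: keep waiting */
    }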

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
Cc: Daniel Vetter <***@ffwll.ch>
Reviewed-by: Daniel Vetter <***@ffwll.ch>
---
drivers/gpu/drm/i915/i915_drv.h | 2 --
drivers/gpu/drm/i915/i915_gem.c | 44 ++++++++++++++++-----------------
drivers/gpu/drm/i915/i915_gem_userptr.c | 6 ++---
drivers/gpu/drm/i915/intel_display.c | 13 +++++-----
drivers/gpu/drm/i915/intel_lrc.c | 2 +-
drivers/gpu/drm/i915/intel_ringbuffer.c | 2 +-
6 files changed, 32 insertions(+), 37 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index f74bca326b79..bbdb056d2a8e 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2978,8 +2978,6 @@ i915_gem_find_active_request(struct intel_engine_cs *ring);

bool i915_gem_retire_requests(struct drm_device *dev);
void i915_gem_retire_requests_ring(struct intel_engine_cs *ring);
-int __must_check i915_gem_check_wedge(struct i915_gpu_error *error,
- bool interruptible);

static inline u32 i915_reset_counter(struct i915_gpu_error *error)
{
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 56069bdada85..f570990f03e0 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -206,11 +206,10 @@ i915_gem_object_put_pages_phys(struct drm_i915_gem_object *obj)
BUG_ON(obj->madv == __I915_MADV_PURGED);

ret = i915_gem_object_set_to_cpu_domain(obj, true);
- if (ret) {
+ if (WARN_ON(ret)) {
/* In the event of a disaster, abandon all caches and
* hope for the best.
*/
- WARN_ON(ret != -EIO);
obj->base.read_domains = obj->base.write_domain = I915_GEM_DOMAIN_CPU;
}

@@ -1104,15 +1103,13 @@ put_rpm:
return ret;
}

-int
-i915_gem_check_wedge(struct i915_gpu_error *error,
- bool interruptible)
+static int
+i915_gem_check_wedge(unsigned reset_counter, bool interruptible)
{
- if (i915_reset_in_progress_or_wedged(error)) {
- /* Recovery complete, but the reset failed ... */
- if (i915_terminally_wedged(error))
- return -EIO;
+ if (__i915_terminally_wedged(reset_counter))
+ return -EIO;

+ if (__i915_reset_in_progress(reset_counter)) {
/* Non-interruptible callers can't handle -EAGAIN, hence return
* -EIO unconditionally for these. */
if (!interruptible)
@@ -1283,13 +1280,14 @@ int __i915_wait_request(struct drm_i915_gem_request *req,
prepare_to_wait(&ring->irq_queue, &wait, state);

/* We need to check whether any gpu reset happened in between
- * the caller grabbing the seqno and now ... */
+ * the request being submitted and now. If a reset has occurred,
+ * the request is effectively complete (we either are in the
+ * process of or have discarded the rendering and completely
+ * reset the GPU. The results of the request are lost and we
+ * are free to continue on with the original operation.
+ */
if (req->reset_counter != i915_reset_counter(&dev_priv->gpu_error)) {
- /* ... but upgrade the -EAGAIN to an -EIO if the gpu
- * is truely gone. */
- ret = i915_gem_check_wedge(&dev_priv->gpu_error, interruptible);
- if (ret == 0)
- ret = -EAGAIN;
+ ret = 0;
break;
}

@@ -2162,11 +2160,10 @@ i915_gem_object_put_pages_gtt(struct drm_i915_gem_object *obj)
BUG_ON(obj->madv == __I915_MADV_PURGED);

ret = i915_gem_object_set_to_cpu_domain(obj, true);
- if (ret) {
+ if (WARN_ON(ret)) {
/* In the event of a disaster, abandon all caches and
* hope for the best.
*/
- WARN_ON(ret != -EIO);
i915_gem_clflush_object(obj, true);
obj->base.read_domains = obj->base.write_domain = I915_GEM_DOMAIN_CPU;
}
@@ -2686,8 +2683,11 @@ int i915_gem_request_alloc(struct intel_engine_cs *ring,

*req_out = NULL;

- ret = i915_gem_check_wedge(&dev_priv->gpu_error,
- dev_priv->mm.interruptible);
+ /* ABI: Before userspace accesses the GPU (e.g. execbuffer), report
+ * EIO if the GPU is already wedged, or EAGAIN to drop the struct_mutex
+ * and restart.
+ */
+ ret = i915_gem_check_wedge(reset_counter, dev_priv->mm.interruptible);
if (ret)
return ret;

@@ -4088,9 +4088,9 @@ i915_gem_ring_throttle(struct drm_device *dev, struct drm_file *file)
if (ret)
return ret;

- ret = i915_gem_check_wedge(&dev_priv->gpu_error, false);
- if (ret)
- return ret;
+ /* ABI: return -EIO if already wedged */
+ if (i915_terminally_wedged(&dev_priv->gpu_error))
+ return -EIO;

spin_lock(&file_priv->mm.lock);
list_for_each_entry(request, &file_priv->mm.request_list, client_list) {
diff --git a/drivers/gpu/drm/i915/i915_gem_userptr.c b/drivers/gpu/drm/i915/i915_gem_userptr.c
index 19fb0bddc1cd..1a5f89dba4af 100644
--- a/drivers/gpu/drm/i915/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/i915_gem_userptr.c
@@ -81,10 +81,8 @@ static void __cancel_userptr__worker(struct work_struct *work)
was_interruptible = dev_priv->mm.interruptible;
dev_priv->mm.interruptible = false;

- list_for_each_entry_safe(vma, tmp, &obj->vma_list, vma_link) {
- int ret = i915_vma_unbind(vma);
- WARN_ON(ret && ret != -EIO);
- }
+ list_for_each_entry_safe(vma, tmp, &obj->vma_list, vma_link)
+ WARN_ON(i915_vma_unbind(vma));
WARN_ON(i915_gem_object_put_pages(obj));

dev_priv->mm.interruptible = was_interruptible;
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index ee0ec72b16b4..7e36f85d3109 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -13516,11 +13516,9 @@ static int intel_atomic_prepare_commit(struct drm_device *dev,

ret = __i915_wait_request(intel_plane_state->wait_req,
true, NULL, NULL);
-
- /* Swallow -EIO errors to allow updates during hw lockup. */
- if (ret == -EIO)
- ret = 0;
if (ret) {
+ /* Any hang should be swallowed by the wait */
+ WARN_ON(ret == -EIO);
mutex_lock(&dev->struct_mutex);
drm_atomic_helper_cleanup_planes(dev, state);
mutex_unlock(&dev->struct_mutex);
@@ -13889,10 +13887,11 @@ intel_prepare_plane_fb(struct drm_plane *plane,
*/
if (needs_modeset(crtc_state))
ret = i915_gem_object_wait_rendering(old_obj, true);
-
- /* Swallow -EIO errors to allow updates during hw lockup. */
- if (ret && ret != -EIO)
+ if (ret) {
+ /* GPU hangs should have been swallowed by the wait */
+ WARN_ON(ret == -EIO);
return ret;
+ }
}

/* For framebuffer backed by dmabuf, wait for fence */
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 3b436eb86ac7..32644338e6f8 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1004,7 +1004,7 @@ void intel_logical_ring_stop(struct intel_engine_cs *ring)
return;

ret = intel_ring_idle(ring);
- if (ret && !i915_reset_in_progress_or_wedged(&to_i915(ring->dev)->gpu_error))
+ if (ret)
DRM_ERROR("failed to quiesce %s whilst cleaning up: %d\n",
ring->name, ret);

diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index 15121f3fd4f7..99780b674311 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -3062,7 +3062,7 @@ intel_stop_ring_buffer(struct intel_engine_cs *ring)
return;

ret = intel_ring_idle(ring);
- if (ret && !i915_reset_in_progress_or_wedged(&to_i915(ring->dev)->gpu_error))
+ if (ret)
DRM_ERROR("failed to quiesce %s whilst cleaning up: %d\n",
ring->name, ret);
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:16:24 UTC
Permalink
If we do not have low-level support for resetting the GPU, or if the
user has explicitly disabled resetting the device, the failure is expected.
Since it is an expected failure, we should be using a lower priority
message than *ERROR*, perhaps NOTICE. In the absence of DRM_NOTICE, just
emit the expected failure as a DEBUG message.

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
Cc: Daniel Vetter <***@ffwll.ch>
Reviewed-by: Daniel Vetter <***@ffwll.ch>
---
drivers/gpu/drm/i915/i915_drv.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index 2f03379cdb4b..5160f1414de4 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -910,7 +910,10 @@ int i915_reset(struct drm_device *dev)
pr_notice("drm/i915: Resetting chip after gpu hang\n");

if (ret) {
- DRM_ERROR("Failed to reset chip: %i\n", ret);
+ if (ret != -ENODEV)
+ DRM_ERROR("Failed to reset chip: %i\n", ret);
+ else
+ DRM_DEBUG_DRIVER("GPU reset disabled\n");
goto error;
}
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:16:47 UTC
Permalink
Ideally, we want to automagically have the GPU respond to the
instantaneous load by reclocking itself. However, reclocking occurs
relatively slowly, and to the client waiting for a result from the GPU,
too late. To compensate and reduce the client latency, we allow the
first wait from a client to boost the GPU clocks to maximum. This
overcomes the lag in autoreclocking, at the expense of forcing the GPU
clocks too high. So to offset the excessive power usage, we currently
allow a client to only boost the clocks once before we detect the GPU
is idle again. This works reasonably for, say, the first frame in a
benchmark, but for many more synchronous workloads (like OpenCL) we find
the GPU clocks remain too low. By noting a wait which would idle the GPU
(i.e. we just waited upon the last known request), we can give that
client the idle boost credit (for their next wait) without the 100ms
delay required for us to detect the GPU idle state. The intention is to
boost clients that are stalling in the process of feeding the GPU more
work (and who in doing so let the GPU idle), without granting boost
credits to clients that are throttling themselves (such as compositors).
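
A minimal sketch of the new condition (illustrative; the kernel returns
the credit by unlinking the client from the rps list, modelled here as a
boolean):

    #include <stdbool.h>

    /* If the request we just waited on was the last one submitted to
     * the engine, this client's stall is what idled the GPU, so hand
     * its one-shot waitboost credit back immediately instead of
     * waiting ~100ms for idle detection.
     */
    static bool refund_waitboost(bool wait_succeeded,
                                 unsigned int req_seqno,
                                 unsigned int last_submitted_seqno)
    {
            return wait_succeeded && req_seqno == last_submitted_seqno;
    }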

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
Cc: "Zou, Nanhai" <***@intel.com>
Cc: Jesse Barnes <***@virtuousgeek.org>
Reviewed-by: Jesse Barnes <***@virtuousgeek.org>
---
drivers/gpu/drm/i915/i915_gem.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index e9f5ca7ea835..3fea582768e9 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1314,6 +1314,22 @@ complete:
*timeout = 0;
}

+ if (ret == 0 && rps && req->seqno == req->ring->last_submitted_seqno) {
+ /* The GPU is now idle and this client has stalled.
+ * Since no other client has submitted a request in the
+ * meantime, assume that this client is the only one
+ * supplying work to the GPU but is unable to keep that
+ * work supplied because it is waiting. Since the GPU is
+ * then never kept fully busy, RPS autoclocking will
+ * keep the clocks relatively low, causing further delays.
+ * Compensate by giving the synchronous client credit for
+ * a waitboost next time.
+ */
+ spin_lock(&req->i915->rps.client_lock);
+ list_del_init(&rps->link);
+ spin_unlock(&req->i915->rps.client_lock);
+ }
+
return ret;
}
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:16:52 UTC
Permalink
igt likes to inject GPU hangs into its command streams. However, as we
expect these hangs, we don't actually want them recorded in the dmesg
output or stored in the i915_error_state (usually). To accommodate this,
allow userspace to set a flag on the context that any hang emanating
from that context will not be recorded. We still do the error capture
(otherwise how do we find the guilty context and know its intent?), since
part of the reason for random GPU hang injection is to exercise the race
conditions between the error capture and normal execution.

v2: Split out the request->ringbuf error capture changes.
v3: Move the flag defines next to the intel_context->flags definition
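
A hedged example of how igt-style userspace could use the new parameter
(assumes libdrm and a uapi header carrying this patch's
I915_CONTEXT_PARAM_NO_ERROR_CAPTURE define):

    #include <string.h>
    #include <xf86drm.h>
    #include <i915_drm.h>

    /* Opt a context out of dmesg/error-state recording before
     * deliberately hanging it.
     */
    static int context_disable_error_capture(int fd, __u32 ctx_id)
    {
            struct drm_i915_gem_context_param p;

            memset(&p, 0, sizeof(p));
            p.ctx_id = ctx_id;
            p.param = I915_CONTEXT_PARAM_NO_ERROR_CAPTURE;
            p.value = 1;    /* size must remain 0, per the patch */
            return drmIoctl(fd, DRM_IOCTL_I915_GEM_CONTEXT_SETPARAM, &p);
    }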

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
Acked-by: Daniel Vetter <***@ffwll.ch>
Reviewed-by: Dave Gordon <***@intel.com>
---
drivers/gpu/drm/i915/i915_drv.h | 7 +++++--
drivers/gpu/drm/i915/i915_gem_context.c | 13 +++++++++++++
drivers/gpu/drm/i915/i915_gpu_error.c | 14 +++++++++-----
include/uapi/drm/i915_drm.h | 1 +
4 files changed, 28 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index c3b795f1566b..57e450e25ad6 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -486,6 +486,7 @@ struct drm_i915_error_state {
struct timeval time;

char error_msg[128];
+ bool simulated;
int iommu;
u32 reset_count;
u32 suspend_count;
@@ -842,7 +843,6 @@ struct i915_ctx_hang_stats {
/* This must match up with the value previously used for execbuf2.rsvd1. */
#define DEFAULT_CONTEXT_HANDLE 0

-#define CONTEXT_NO_ZEROMAP (1<<0)
/**
* struct intel_context - as the name implies, represents a context.
* @ref: reference count.
@@ -867,11 +867,14 @@ struct intel_context {
int user_handle;
uint8_t remap_slice;
struct drm_i915_private *i915;
- int flags;
struct drm_i915_file_private *file_priv;
struct i915_ctx_hang_stats hang_stats;
struct i915_hw_ppgtt *ppgtt;

+ unsigned flags;
+#define CONTEXT_NO_ZEROMAP (1<<0)
+#define CONTEXT_NO_ERROR_CAPTURE (1<<1)
+
/* Legacy ring buffer submission */
struct {
struct drm_i915_gem_object *rcs_state;
diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index e5e9a8918f19..0aea5ccf6d68 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -939,6 +939,9 @@ int i915_gem_context_getparam_ioctl(struct drm_device *dev, void *data,
else
args->value = to_i915(dev)->gtt.base.total;
break;
+ case I915_CONTEXT_PARAM_NO_ERROR_CAPTURE:
+ args->value = !!(ctx->flags & CONTEXT_NO_ERROR_CAPTURE);
+ break;
default:
ret = -EINVAL;
break;
@@ -984,6 +987,16 @@ int i915_gem_context_setparam_ioctl(struct drm_device *dev, void *data,
ctx->flags |= args->value ? CONTEXT_NO_ZEROMAP : 0;
}
break;
+ case I915_CONTEXT_PARAM_NO_ERROR_CAPTURE:
+ if (args->size) {
+ ret = -EINVAL;
+ } else {
+ if (args->value)
+ ctx->flags |= CONTEXT_NO_ERROR_CAPTURE;
+ else
+ ctx->flags &= ~CONTEXT_NO_ERROR_CAPTURE;
+ }
+ break;
default:
ret = -EINVAL;
break;
diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index 93da2c7581f6..4f17d6847569 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -1040,6 +1040,8 @@ static void i915_gem_record_rings(struct drm_device *dev,
rcu_read_unlock();
}

+ error->simulated |= request->ctx->flags & CONTEXT_NO_ERROR_CAPTURE;
+
rb = request->ringbuf;
error->ring[i].cpu_ring_head = rb->head;
error->ring[i].cpu_ring_tail = rb->tail;
@@ -1333,12 +1335,14 @@ void i915_capture_error_state(struct drm_device *dev, bool wedged,
i915_error_capture_msg(dev, error, wedged, error_msg);
DRM_INFO("%s\n", error->error_msg);

- spin_lock_irqsave(&dev_priv->gpu_error.lock, flags);
- if (dev_priv->gpu_error.first_error == NULL) {
- dev_priv->gpu_error.first_error = error;
- error = NULL;
+ if (!error->simulated) {
+ spin_lock_irqsave(&dev_priv->gpu_error.lock, flags);
+ if (dev_priv->gpu_error.first_error == NULL) {
+ dev_priv->gpu_error.first_error = error;
+ error = NULL;
+ }
+ spin_unlock_irqrestore(&dev_priv->gpu_error.lock, flags);
}
- spin_unlock_irqrestore(&dev_priv->gpu_error.lock, flags);

if (error) {
i915_error_state_free(&error->ref);
diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
index acf21026c78a..7fee4416dcc7 100644
--- a/include/uapi/drm/i915_drm.h
+++ b/include/uapi/drm/i915_drm.h
@@ -1140,6 +1140,7 @@ struct drm_i915_gem_context_param {
#define I915_CONTEXT_PARAM_BAN_PERIOD 0x1
#define I915_CONTEXT_PARAM_NO_ZEROMAP 0x2
#define I915_CONTEXT_PARAM_GTT_SIZE 0x3
+#define I915_CONTEXT_PARAM_NO_ERROR_CAPTURE 0x4
__u64 value;
};
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:16:30 UTC
Permalink
In order to simplify the next couple of patches, extract the
lazy_coherency optimisation out of the engine->get_seqno() vfunc into
its own callback.

v2: Rename the barrier to engine->irq_seqno_barrier to try and better
reflect that the barrier is only required after the user interrupt before
reading the seqno (to ensure that the seqno update lands in time as we
do not have strict seqno-irq ordering on all platforms).
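
A minimal sketch of the split (illustrative): the optional barrier is
applied only on the post-interrupt path, and get_seqno() becomes a plain
read:

    struct engine {
            void (*irq_seqno_barrier)(struct engine *e); /* may be NULL */
            unsigned int (*get_seqno)(struct engine *e);
    };

    /* Post-interrupt path: force coherency first, then read. */
    static unsigned int coherent_seqno(struct engine *e)
    {
            if (e->irq_seqno_barrier)
                    e->irq_seqno_barrier(e);
            return e->get_seqno(e);
    }

    /* Hot path (e.g. hangcheck sampling): lazy read, no barrier. */
    static unsigned int lazy_seqno(struct engine *e)
    {
            return e->get_seqno(e);
    }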

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_debugfs.c | 6 ++---
drivers/gpu/drm/i915/i915_drv.h | 12 ++++++----
drivers/gpu/drm/i915/i915_gpu_error.c | 2 +-
drivers/gpu/drm/i915/i915_irq.c | 4 ++--
drivers/gpu/drm/i915/i915_trace.h | 2 +-
drivers/gpu/drm/i915/intel_breadcrumbs.c | 4 ++--
drivers/gpu/drm/i915/intel_lrc.c | 39 ++++++++++++--------------------
drivers/gpu/drm/i915/intel_ringbuffer.c | 36 +++++++++++++++--------------
drivers/gpu/drm/i915/intel_ringbuffer.h | 4 ++--
9 files changed, 53 insertions(+), 56 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 9396597b136d..1499e2337e5d 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -600,7 +600,7 @@ static int i915_gem_pageflip_info(struct seq_file *m, void *data)
ring->name,
i915_gem_request_get_seqno(work->flip_queued_req),
dev_priv->next_seqno,
- ring->get_seqno(ring, true),
+ ring->get_seqno(ring),
i915_gem_request_completed(work->flip_queued_req, true));
} else
seq_printf(m, "Flip not associated with any ring\n");
@@ -734,7 +734,7 @@ static void i915_ring_seqno_info(struct seq_file *m,

if (ring->get_seqno) {
seq_printf(m, "Current sequence (%s): %x\n",
- ring->name, ring->get_seqno(ring, false));
+ ring->name, ring->get_seqno(ring));
}

spin_lock(&ring->breadcrumbs.lock);
@@ -1354,7 +1354,7 @@ static int i915_hangcheck_info(struct seq_file *m, void *unused)
intel_runtime_pm_get(dev_priv);

for_each_ring(ring, dev_priv, i) {
- seqno[i] = ring->get_seqno(ring, false);
+ seqno[i] = ring->get_seqno(ring);
acthd[i] = intel_ring_get_active_head(ring);
}

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index a9e8de57e848..9762aa76bb0a 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2972,15 +2972,19 @@ i915_seqno_passed(uint32_t seq1, uint32_t seq2)
static inline bool i915_gem_request_started(struct drm_i915_gem_request *req,
bool lazy_coherency)
{
- u32 seqno = req->ring->get_seqno(req->ring, lazy_coherency);
- return i915_seqno_passed(seqno, req->previous_seqno);
+ if (!lazy_coherency && req->ring->irq_seqno_barrier)
+ req->ring->irq_seqno_barrier(req->ring);
+ return i915_seqno_passed(req->ring->get_seqno(req->ring),
+ req->previous_seqno);
}

static inline bool i915_gem_request_completed(struct drm_i915_gem_request *req,
bool lazy_coherency)
{
- u32 seqno = req->ring->get_seqno(req->ring, lazy_coherency);
- return i915_seqno_passed(seqno, req->seqno);
+ if (!lazy_coherency && req->ring->irq_seqno_barrier)
+ req->ring->irq_seqno_barrier(req->ring);
+ return i915_seqno_passed(req->ring->get_seqno(req->ring),
+ req->seqno);
}

int __must_check i915_gem_get_seqno(struct drm_device *dev, u32 *seqno);
diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index f805d117f3d1..01d0206ca4dd 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -902,8 +902,8 @@ static void i915_record_ring_state(struct drm_device *dev,

ering->waiting = intel_engine_has_waiter(ring);
ering->instpm = I915_READ(RING_INSTPM(ring->mmio_base));
- ering->seqno = ring->get_seqno(ring, false);
ering->acthd = intel_ring_get_active_head(ring);
+ ering->seqno = ring->get_seqno(ring);
ering->start = I915_READ_START(ring);
ering->head = I915_READ_HEAD(ring);
ering->tail = I915_READ_TAIL(ring);
diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index 95b997a57da8..d73669783045 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -2903,7 +2903,7 @@ static int semaphore_passed(struct intel_engine_cs *ring)
if (signaller->hangcheck.deadlock >= I915_NUM_RINGS)
return -1;

- if (i915_seqno_passed(signaller->get_seqno(signaller, false), seqno))
+ if (i915_seqno_passed(signaller->get_seqno(signaller), seqno))
return 1;

/* cursory check for an unkickable deadlock */
@@ -3067,8 +3067,8 @@ static void i915_hangcheck_elapsed(struct work_struct *work)

semaphore_clear_deadlocks(dev_priv);

- seqno = ring->get_seqno(ring, false);
acthd = intel_ring_get_active_head(ring);
+ seqno = ring->get_seqno(ring);

if (ring->hangcheck.seqno == seqno) {
if (ring_idle(ring, seqno)) {
diff --git a/drivers/gpu/drm/i915/i915_trace.h b/drivers/gpu/drm/i915/i915_trace.h
index 52b2d409945d..cfb5f78a6e84 100644
--- a/drivers/gpu/drm/i915/i915_trace.h
+++ b/drivers/gpu/drm/i915/i915_trace.h
@@ -573,7 +573,7 @@ TRACE_EVENT(i915_gem_request_notify,
TP_fast_assign(
__entry->dev = ring->dev->primary->index;
__entry->ring = ring->id;
- __entry->seqno = ring->get_seqno(ring, false);
+ __entry->seqno = ring->get_seqno(ring);
),

TP_printk("dev=%u, ring=%u, seqno=%u",
diff --git a/drivers/gpu/drm/i915/intel_breadcrumbs.c b/drivers/gpu/drm/i915/intel_breadcrumbs.c
index 9f756583a44e..10b0add54acf 100644
--- a/drivers/gpu/drm/i915/intel_breadcrumbs.c
+++ b/drivers/gpu/drm/i915/intel_breadcrumbs.c
@@ -127,7 +127,7 @@ bool intel_engine_add_wait(struct intel_engine_cs *engine,
struct intel_wait *wait)
{
struct intel_breadcrumbs *b = &engine->breadcrumbs;
- u32 seqno = engine->get_seqno(engine, true);
+ u32 seqno = engine->get_seqno(engine);
struct rb_node **p, *parent, *completed;
bool first;

@@ -269,7 +269,7 @@ void intel_engine_remove_wait(struct intel_engine_cs *engine,
* the first_waiter. This is undesirable if that
* waiter is a high priority task.
*/
- u32 seqno = engine->get_seqno(engine, true);
+ u32 seqno = engine->get_seqno(engine);
while (i915_seqno_passed(seqno,
to_wait(next)->seqno)) {
struct rb_node *n = rb_next(next);
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 16fa58a0a930..333e95bda78a 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1775,7 +1775,7 @@ static int gen8_emit_flush_render(struct drm_i915_gem_request *request,
return 0;
}

-static u32 gen8_get_seqno(struct intel_engine_cs *ring, bool lazy_coherency)
+static u32 gen8_get_seqno(struct intel_engine_cs *ring)
{
return intel_read_status_page(ring, I915_GEM_HWS_INDEX);
}
@@ -1785,9 +1785,8 @@ static void gen8_set_seqno(struct intel_engine_cs *ring, u32 seqno)
intel_write_status_page(ring, I915_GEM_HWS_INDEX, seqno);
}

-static u32 bxt_a_get_seqno(struct intel_engine_cs *ring, bool lazy_coherency)
+static void bxt_seqno_barrier(struct intel_engine_cs *ring)
{
-
/*
* On BXT A steppings there is a HW coherency issue whereby the
* MI_STORE_DATA_IMM storing the completed request's seqno
@@ -1798,11 +1797,7 @@ static u32 bxt_a_get_seqno(struct intel_engine_cs *ring, bool lazy_coherency)
* bxt_a_set_seqno(), where we also do a clflush after the write. So
* this clflush in practice becomes an invalidate operation.
*/
-
- if (!lazy_coherency)
- intel_flush_status_page(ring, I915_GEM_HWS_INDEX);
-
- return intel_read_status_page(ring, I915_GEM_HWS_INDEX);
+ intel_flush_status_page(ring, I915_GEM_HWS_INDEX);
}

static void bxt_a_set_seqno(struct intel_engine_cs *ring, u32 seqno)
@@ -2007,12 +2002,11 @@ static int logical_render_ring_init(struct drm_device *dev)
ring->init_hw = gen8_init_render_ring;
ring->init_context = gen8_init_rcs_context;
ring->cleanup = intel_fini_pipe_control;
+ ring->get_seqno = gen8_get_seqno;
+ ring->set_seqno = gen8_set_seqno;
if (IS_BXT_REVID(dev, 0, BXT_REVID_A1)) {
- ring->get_seqno = bxt_a_get_seqno;
+ ring->irq_seqno_barrier = bxt_seqno_barrier;
ring->set_seqno = bxt_a_set_seqno;
- } else {
- ring->get_seqno = gen8_get_seqno;
- ring->set_seqno = gen8_set_seqno;
}
ring->emit_request = gen8_emit_request;
ring->emit_flush = gen8_emit_flush_render;
@@ -2059,12 +2053,11 @@ static int logical_bsd_ring_init(struct drm_device *dev)
GT_CONTEXT_SWITCH_INTERRUPT << GEN8_VCS1_IRQ_SHIFT;

ring->init_hw = gen8_init_common_ring;
+ ring->get_seqno = gen8_get_seqno;
+ ring->set_seqno = gen8_set_seqno;
if (IS_BXT_REVID(dev, 0, BXT_REVID_A1)) {
- ring->get_seqno = bxt_a_get_seqno;
+ ring->irq_seqno_barrier = bxt_seqno_barrier;
ring->set_seqno = bxt_a_set_seqno;
- } else {
- ring->get_seqno = gen8_get_seqno;
- ring->set_seqno = gen8_set_seqno;
}
ring->emit_request = gen8_emit_request;
ring->emit_flush = gen8_emit_flush;
@@ -2114,12 +2107,11 @@ static int logical_blt_ring_init(struct drm_device *dev)
GT_CONTEXT_SWITCH_INTERRUPT << GEN8_BCS_IRQ_SHIFT;

ring->init_hw = gen8_init_common_ring;
+ ring->get_seqno = gen8_get_seqno;
+ ring->set_seqno = gen8_set_seqno;
if (IS_BXT_REVID(dev, 0, BXT_REVID_A1)) {
- ring->get_seqno = bxt_a_get_seqno;
+ ring->irq_seqno_barrier = bxt_seqno_barrier;
ring->set_seqno = bxt_a_set_seqno;
- } else {
- ring->get_seqno = gen8_get_seqno;
- ring->set_seqno = gen8_set_seqno;
}
ring->emit_request = gen8_emit_request;
ring->emit_flush = gen8_emit_flush;
@@ -2144,12 +2136,11 @@ static int logical_vebox_ring_init(struct drm_device *dev)
GT_CONTEXT_SWITCH_INTERRUPT << GEN8_VECS_IRQ_SHIFT;

ring->init_hw = gen8_init_common_ring;
+ ring->get_seqno = gen8_get_seqno;
+ ring->set_seqno = gen8_set_seqno;
if (IS_BXT_REVID(dev, 0, BXT_REVID_A1)) {
- ring->get_seqno = bxt_a_get_seqno;
+ ring->irq_seqno_barrier = bxt_seqno_barrier;
ring->set_seqno = bxt_a_set_seqno;
- } else {
- ring->get_seqno = gen8_get_seqno;
- ring->set_seqno = gen8_set_seqno;
}
ring->emit_request = gen8_emit_request;
ring->emit_flush = gen8_emit_flush;
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index 60b0df2c5399..57ec21c5b1ab 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -1485,8 +1485,8 @@ pc_render_add_request(struct drm_i915_gem_request *req)
return 0;
}

-static u32
-gen6_ring_get_seqno(struct intel_engine_cs *ring, bool lazy_coherency)
+static void
+gen6_seqno_barrier(struct intel_engine_cs *ring)
{
/* Workaround to force correct ordering between irq and seqno writes on
* ivb (and maybe also on snb) by reading from a CS register (like
@@ -1500,18 +1500,14 @@ gen6_ring_get_seqno(struct intel_engine_cs *ring, bool lazy_coherency)
* a delay after every batch i.e. much more frequent than a delay
* when waiting for the interrupt (with the same net latency).
*/
- if (!lazy_coherency) {
- struct drm_i915_private *dev_priv = ring->dev->dev_private;
- POSTING_READ_FW(RING_ACTHD(ring->mmio_base));
-
- intel_flush_status_page(ring, I915_GEM_HWS_INDEX);
- }
+ struct drm_i915_private *dev_priv = ring->i915;
+ POSTING_READ_FW(RING_ACTHD(ring->mmio_base));

- return intel_read_status_page(ring, I915_GEM_HWS_INDEX);
+ intel_flush_status_page(ring, I915_GEM_HWS_INDEX);
}

static u32
-ring_get_seqno(struct intel_engine_cs *ring, bool lazy_coherency)
+ring_get_seqno(struct intel_engine_cs *ring)
{
return intel_read_status_page(ring, I915_GEM_HWS_INDEX);
}
@@ -1523,7 +1519,7 @@ ring_set_seqno(struct intel_engine_cs *ring, u32 seqno)
}

static u32
-pc_render_get_seqno(struct intel_engine_cs *ring, bool lazy_coherency)
+pc_render_get_seqno(struct intel_engine_cs *ring)
{
return ring->scratch.cpu_page[0];
}
@@ -2698,7 +2694,8 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
ring->irq_get = gen8_ring_get_irq;
ring->irq_put = gen8_ring_put_irq;
ring->irq_enable_mask = GT_RENDER_USER_INTERRUPT;
- ring->get_seqno = gen6_ring_get_seqno;
+ ring->irq_seqno_barrier = gen6_seqno_barrier;
+ ring->get_seqno = ring_get_seqno;
ring->set_seqno = ring_set_seqno;
if (i915_semaphore_is_enabled(dev)) {
WARN_ON(!dev_priv->semaphore_obj);
@@ -2715,7 +2712,8 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
ring->irq_get = gen6_ring_get_irq;
ring->irq_put = gen6_ring_put_irq;
ring->irq_enable_mask = GT_RENDER_USER_INTERRUPT;
- ring->get_seqno = gen6_ring_get_seqno;
+ ring->irq_seqno_barrier = gen6_seqno_barrier;
+ ring->get_seqno = ring_get_seqno;
ring->set_seqno = ring_set_seqno;
if (i915_semaphore_is_enabled(dev)) {
ring->semaphore.sync_to = gen6_ring_sync;
@@ -2829,7 +2827,8 @@ int intel_init_bsd_ring_buffer(struct drm_device *dev)
ring->write_tail = gen6_bsd_ring_write_tail;
ring->flush = gen6_bsd_ring_flush;
ring->add_request = gen6_add_request;
- ring->get_seqno = gen6_ring_get_seqno;
+ ring->irq_seqno_barrier = gen6_seqno_barrier;
+ ring->get_seqno = ring_get_seqno;
ring->set_seqno = ring_set_seqno;
if (INTEL_INFO(dev)->gen >= 8) {
ring->irq_enable_mask =
@@ -2901,7 +2900,8 @@ int intel_init_bsd2_ring_buffer(struct drm_device *dev)
ring->mmio_base = GEN8_BSD2_RING_BASE;
ring->flush = gen6_bsd_ring_flush;
ring->add_request = gen6_add_request;
- ring->get_seqno = gen6_ring_get_seqno;
+ ring->irq_seqno_barrier = gen6_seqno_barrier;
+ ring->get_seqno = ring_get_seqno;
ring->set_seqno = ring_set_seqno;
ring->irq_enable_mask =
GT_RENDER_USER_INTERRUPT << GEN8_VCS2_IRQ_SHIFT;
@@ -2931,7 +2931,8 @@ int intel_init_blt_ring_buffer(struct drm_device *dev)
ring->write_tail = ring_write_tail;
ring->flush = gen6_ring_flush;
ring->add_request = gen6_add_request;
- ring->get_seqno = gen6_ring_get_seqno;
+ ring->irq_seqno_barrier = gen6_seqno_barrier;
+ ring->get_seqno = ring_get_seqno;
ring->set_seqno = ring_set_seqno;
if (INTEL_INFO(dev)->gen >= 8) {
ring->irq_enable_mask =
@@ -2988,7 +2989,8 @@ int intel_init_vebox_ring_buffer(struct drm_device *dev)
ring->write_tail = ring_write_tail;
ring->flush = gen6_ring_flush;
ring->add_request = gen6_add_request;
- ring->get_seqno = gen6_ring_get_seqno;
+ ring->irq_seqno_barrier = gen6_seqno_barrier;
+ ring->get_seqno = ring_get_seqno;
ring->set_seqno = ring_set_seqno;

if (INTEL_INFO(dev)->gen >= 8) {
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 51fcb66bfc4a..3b49726b1732 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -219,8 +219,8 @@ struct intel_engine_cs {
* seen value is good enough. Note that the seqno will always be
* monotonic, even if not coherent.
*/
- u32 (*get_seqno)(struct intel_engine_cs *ring,
- bool lazy_coherency);
+ void (*irq_seqno_barrier)(struct intel_engine_cs *ring);
+ u32 (*get_seqno)(struct intel_engine_cs *ring);
void (*set_seqno)(struct intel_engine_cs *ring,
u32 seqno);
int (*dispatch_execbuffer)(struct drm_i915_gem_request *req,
--
2.7.0.rc3
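For context, the diff above splits the old get_seqno(ring, lazy_coherency) into
an optional irq_seqno_barrier (the expensive coherency flush, only needed after
an interrupt) plus a plain get_seqno read. A minimal standalone C sketch of how
a caller might combine the pair — the stub types and names are illustrative,
not the driver's actual wait loop:

struct engine {
        void (*irq_seqno_barrier)(struct engine *engine); /* may be NULL */
        unsigned int (*get_seqno)(struct engine *engine);
};

/* After an interrupt, flush any stale status-page value before the
 * cheap read; pure pollers can call get_seqno() alone. */
static unsigned int engine_seqno_after_irq(struct engine *engine)
{
        if (engine->irq_seqno_barrier)
                engine->irq_seqno_barrier(engine);
        return engine->get_seqno(engine);
}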
Chris Wilson
2016-01-11 09:16:48 UTC
Describe the intent of boosting the GPU frequency to maximum before
waiting on the GPU.

RPS waitboosting was introduced with

commit b29c19b645287f7062e17d70fa4e9781a01a5d88
Author: Chris Wilson <***@chris-wilson.co.uk>
Date: Wed Sep 25 17:34:56 2013 +0100

drm/i915: Boost RPS frequency for CPU stalls

but lacked a concise comment in the code to explain itself.
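The policy spelled out by the comment added below reduces to a simple gate:
promote the GPU to its maximum frequency on a client's first stalled wait in a
busy period, then refuse further boosts until the GPU next idles. A standalone
sketch of that gate, with hypothetical names (the driver's real entry point is
gen6_rps_boost):

#include <stdbool.h>

struct rps {
        unsigned int cur_freq, max_freq;
};

struct rps_client {
        bool boosted;   /* already boosted this busy period? */
};

/* First stalled wait in a busy period: jump straight to max. */
static void rps_maybe_boost(struct rps *rps, struct rps_client *client)
{
        if (client->boosted)
                return;
        client->boosted = true;
        rps->cur_freq = rps->max_freq;
}

/* GPU went idle: the next busy period may boost again. */
static void rps_mark_idle(struct rps_client *client)
{
        client->boosted = false;
}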

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
Reviewed-by: Daniel Vetter <***@ffwll.ch>
---
drivers/gpu/drm/i915/i915_gem.c | 16 ++++++++++++++++
1 file changed, 16 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 3fea582768e9..3948e85eaa48 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1244,6 +1244,22 @@ int __i915_wait_request(struct drm_i915_gem_request *req,
}

trace_i915_gem_request_wait_begin(req);
+
+ /* This client is about to stall waiting for the GPU. In many cases
+ * this is undesirable and limits the throughput of the system, as
+ * many clients cannot continue processing user input/output whilst
+ * blocked. RPS autotuning may take tens of milliseconds to respond
+ * to the GPU load and thus incurs additional latency for the client.
+ * We can circumvent that by promoting the GPU frequency to maximum
+ * before we wait. This makes the GPU throttle up much more quickly
+ * (good for benchmarks and user experience, e.g. window animations),
+ * but at a cost of spending more power processing the workload
+ * (bad for battery). Not all clients even want their results
+ * immediately and for them we should just let the GPU select its own
+ * frequency to maximise efficiency. To prevent a single client from
+ * forcing the clocks too high for the whole system, we only allow
+ * each client to waitboost once in a busy period.
+ */
if (INTEL_INFO(req->i915)->gen >= 6)
gen6_rps_boost(req->i915, rps, req->emitted_jiffies);
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:17:05 UTC
In a number of frequently executed paths, having a direct pointer to the
drm_i915_private from the request is very useful.
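The change is a cached backpointer: store the device pointer once at request
construction instead of chasing ring->dev->dev_private on every use. A minimal
standalone sketch with stub types (the names are illustrative):

struct device_private;                  /* stands in for drm_i915_private */

struct engine {
        struct device_private *i915;
};

struct request {
        struct engine *ring;            /* as the field is still named here */
        struct device_private *i915;    /* cached at creation */
};

static void request_init(struct request *req, struct engine *ring)
{
        req->ring = ring;
        req->i915 = ring->i915;         /* one dereference here... */
}
/* ...so every hot path reads req->i915 directly, as the diff below does. */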

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_gem.c | 7 +++---
drivers/gpu/drm/i915/i915_gem_context.c | 21 +++++++++---------
drivers/gpu/drm/i915/i915_gem_execbuffer.c | 3 +--
drivers/gpu/drm/i915/i915_gem_request.c | 2 +-
drivers/gpu/drm/i915/intel_lrc.c | 6 ++----
drivers/gpu/drm/i915/intel_pm.c | 3 +--
drivers/gpu/drm/i915/intel_ringbuffer.c | 34 ++++++++++++------------------
7 files changed, 32 insertions(+), 44 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 31926a4fb42a..c2a1ec8abc11 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2568,7 +2568,7 @@ __i915_gem_object_sync(struct drm_i915_gem_object *obj,
return 0;

if (!i915.semaphores) {
- struct drm_i915_private *i915 = to_i915(obj->base.dev);
+ struct drm_i915_private *i915 = from_req->i915;
ret = __i915_wait_request(from_req,
i915->mm.interruptible,
NULL,
@@ -4069,12 +4069,11 @@ err:
int i915_gem_l3_remap(struct drm_i915_gem_request *req, int slice)
{
struct intel_engine_cs *ring = req->ring;
- struct drm_device *dev = ring->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
+ struct drm_i915_private *dev_priv = req->i915;
u32 *remap_info = dev_priv->l3_parity.remap_info[slice];
int i, ret;

- if (!HAS_L3_DPF(dev) || !remap_info)
+ if (!HAS_L3_DPF(dev_priv) || !remap_info)
return 0;

ret = intel_ring_begin(req, GEN7_L3LOG_SIZE / 4 * 3);
diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index 361be1085a18..3e3b4bf3fed1 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -524,7 +524,7 @@ mi_set_context(struct drm_i915_gem_request *req, u32 hw_flags)
const int num_rings =
/* Use an extended w/a on ivb+ if signalling from other rings */
i915.semaphores ?
- hweight32(INTEL_INFO(ring->dev)->ring_mask) - 1 :
+ hweight32(INTEL_INFO(req->i915)->ring_mask) - 1 :
0;
int len, i, ret;

@@ -533,21 +533,21 @@ mi_set_context(struct drm_i915_gem_request *req, u32 hw_flags)
* explicitly, so we rely on the value at ring init, stored in
* itlb_before_ctx_switch.
*/
- if (IS_GEN6(ring->dev)) {
+ if (IS_GEN6(req->i915)) {
ret = ring->flush(req, I915_GEM_GPU_DOMAINS, 0);
if (ret)
return ret;
}

/* These flags are for resource streamer on HSW+ */
- if (IS_HASWELL(ring->dev) || INTEL_INFO(ring->dev)->gen >= 8)
+ if (IS_HASWELL(req->i915) || INTEL_INFO(req->i915)->gen >= 8)
flags |= (HSW_MI_RS_SAVE_STATE_EN | HSW_MI_RS_RESTORE_STATE_EN);
- else if (INTEL_INFO(ring->dev)->gen < 8)
+ else if (INTEL_INFO(req->i915)->gen < 8)
flags |= (MI_SAVE_EXT_STATE_EN | MI_RESTORE_EXT_STATE_EN);


len = 4;
- if (INTEL_INFO(ring->dev)->gen >= 7)
+ if (INTEL_INFO(req->i915)->gen >= 7)
len += 2 + (num_rings ? 4*num_rings + 2 : 0);

ret = intel_ring_begin(req, len);
@@ -555,13 +555,13 @@ mi_set_context(struct drm_i915_gem_request *req, u32 hw_flags)
return ret;

/* WaProgramMiArbOnOffAroundMiSetContext:ivb,vlv,hsw,bdw,chv */
- if (INTEL_INFO(ring->dev)->gen >= 7) {
+ if (INTEL_INFO(req->i915)->gen >= 7) {
intel_ring_emit(ring, MI_ARB_ON_OFF | MI_ARB_DISABLE);
if (num_rings) {
struct intel_engine_cs *signaller;

intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(num_rings));
- for_each_ring(signaller, to_i915(ring->dev), i) {
+ for_each_ring(signaller, req->i915, i) {
if (signaller == ring)
continue;

@@ -581,12 +581,12 @@ mi_set_context(struct drm_i915_gem_request *req, u32 hw_flags)
*/
intel_ring_emit(ring, MI_NOOP);

- if (INTEL_INFO(ring->dev)->gen >= 7) {
+ if (INTEL_INFO(req->i915)->gen >= 7) {
if (num_rings) {
struct intel_engine_cs *signaller;

intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(num_rings));
- for_each_ring(signaller, to_i915(ring->dev), i) {
+ for_each_ring(signaller, req->i915, i) {
if (signaller == ring)
continue;

@@ -827,10 +827,9 @@ unpin_out:
int i915_switch_context(struct drm_i915_gem_request *req)
{
struct intel_engine_cs *ring = req->ring;
- struct drm_i915_private *dev_priv = ring->dev->dev_private;

WARN_ON(i915.enable_execlists);
- WARN_ON(!mutex_is_locked(&dev_priv->dev->struct_mutex));
+ WARN_ON(!mutex_is_locked(&req->i915->dev->struct_mutex));

if (req->ctx->legacy_hw_ctx.rcs_state == NULL) { /* We have the fake context */
if (req->ctx != ring->last_context) {
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index dfabeee2ff0b..78b462956c78 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -1099,7 +1099,6 @@ void
i915_gem_execbuffer_move_to_active(struct list_head *vmas,
struct drm_i915_gem_request *req)
{
- struct intel_engine_cs *ring = i915_gem_request_get_ring(req);
struct i915_vma *vma;

list_for_each_entry(vma, vmas, exec_list) {
@@ -1126,7 +1125,7 @@ i915_gem_execbuffer_move_to_active(struct list_head *vmas,
if (entry->flags & EXEC_OBJECT_NEEDS_FENCE) {
i915_gem_request_assign(&obj->last_fenced_req, req);
if (entry->flags & __EXEC_OBJECT_HAS_FENCE) {
- struct drm_i915_private *dev_priv = to_i915(ring->dev);
+ struct drm_i915_private *dev_priv = req->i915;
list_move_tail(&dev_priv->fence_regs[obj->fence_reg].lru_list,
&dev_priv->mm.fence_list);
}
diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
index 01893d847dfd..619a9b063d9c 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/i915_gem_request.c
@@ -199,7 +199,7 @@ int i915_gem_request_alloc(struct intel_engine_cs *ring,
struct intel_context *ctx,
struct drm_i915_gem_request **req_out)
{
- struct drm_i915_private *dev_priv = to_i915(ring->dev);
+ struct drm_i915_private *dev_priv = ring->i915;
unsigned reset_counter = i915_reset_counter(&dev_priv->gpu_error);
struct drm_i915_gem_request *req;
u32 seqno;
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 527eaf59be25..a369aa041522 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -329,8 +329,7 @@ static void execlists_elsp_write(struct drm_i915_gem_request *rq0,
{

struct intel_engine_cs *ring = rq0->ring;
- struct drm_device *dev = ring->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
+ struct drm_i915_private *dev_priv = rq0->i915;
uint64_t desc[2];

if (rq1) {
@@ -1094,8 +1093,7 @@ static int intel_logical_ring_workarounds_emit(struct drm_i915_gem_request *req)
int ret, i;
struct intel_engine_cs *ring = req->ring;
struct intel_ringbuffer *ringbuf = req->ringbuf;
- struct drm_device *dev = ring->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
+ struct drm_i915_private *dev_priv = req->i915;
struct i915_workarounds *w = &dev_priv->workarounds;

if (w->count == 0)
diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index b340f2a1f110..a082b4577599 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -7286,8 +7286,7 @@ static void __intel_rps_boost_work(struct work_struct *work)
struct drm_i915_gem_request *req = boost->req;

if (!i915_gem_request_completed(req))
- gen6_rps_boost(to_i915(req->ring->dev), NULL,
- req->emitted_jiffies);
+ gen6_rps_boost(req->i915, NULL, req->emitted_jiffies);

i915_gem_request_put(req);
kfree(boost);
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index e143da96dcfa..d17dd33ee94c 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -99,7 +99,6 @@ gen4_render_ring_flush(struct drm_i915_gem_request *req,
u32 flush_domains)
{
struct intel_engine_cs *ring = req->ring;
- struct drm_device *dev = ring->dev;
u32 cmd;
int ret;

@@ -138,7 +137,7 @@ gen4_render_ring_flush(struct drm_i915_gem_request *req,
cmd |= MI_EXE_FLUSH;

if (invalidate_domains & I915_GEM_DOMAIN_COMMAND &&
- (IS_G4X(dev) || IS_GEN5(dev)))
+ (IS_G4X(req->i915) || IS_GEN5(req->i915)))
cmd |= MI_INVALIDATE_ISP;

ret = intel_ring_begin(req, 2);
@@ -691,8 +690,7 @@ static int intel_ring_workarounds_emit(struct drm_i915_gem_request *req)
{
int ret, i;
struct intel_engine_cs *ring = req->ring;
- struct drm_device *dev = ring->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
+ struct drm_i915_private *dev_priv = req->i915;
struct i915_workarounds *w = &dev_priv->workarounds;

if (w->count == 0)
@@ -1194,12 +1192,11 @@ static int gen8_rcs_signal(struct drm_i915_gem_request *signaller_req,
{
#define MBOX_UPDATE_DWORDS 8
struct intel_engine_cs *signaller = signaller_req->ring;
- struct drm_device *dev = signaller->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
+ struct drm_i915_private *dev_priv = signaller_req->i915;
struct intel_engine_cs *waiter;
int i, ret, num_rings;

- num_rings = hweight32(INTEL_INFO(dev)->ring_mask);
+ num_rings = hweight32(INTEL_INFO(dev_priv)->ring_mask);
num_dwords += (num_rings-1) * MBOX_UPDATE_DWORDS;
#undef MBOX_UPDATE_DWORDS

@@ -1233,12 +1230,11 @@ static int gen8_xcs_signal(struct drm_i915_gem_request *signaller_req,
{
#define MBOX_UPDATE_DWORDS 6
struct intel_engine_cs *signaller = signaller_req->ring;
- struct drm_device *dev = signaller->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
+ struct drm_i915_private *dev_priv = signaller_req->i915;
struct intel_engine_cs *waiter;
int i, ret, num_rings;

- num_rings = hweight32(INTEL_INFO(dev)->ring_mask);
+ num_rings = hweight32(INTEL_INFO(dev_priv)->ring_mask);
num_dwords += (num_rings-1) * MBOX_UPDATE_DWORDS;
#undef MBOX_UPDATE_DWORDS

@@ -1269,13 +1265,12 @@ static int gen6_signal(struct drm_i915_gem_request *signaller_req,
unsigned int num_dwords)
{
struct intel_engine_cs *signaller = signaller_req->ring;
- struct drm_device *dev = signaller->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
+ struct drm_i915_private *dev_priv = signaller_req->i915;
struct intel_engine_cs *useless;
int i, ret, num_rings;

#define MBOX_UPDATE_DWORDS 3
- num_rings = hweight32(INTEL_INFO(dev)->ring_mask);
+ num_rings = hweight32(INTEL_INFO(dev_priv)->ring_mask);
num_dwords += round_up((num_rings-1) * MBOX_UPDATE_DWORDS, 2);
#undef MBOX_UPDATE_DWORDS

@@ -1352,7 +1347,7 @@ gen8_ring_sync(struct drm_i915_gem_request *waiter_req,
u32 seqno)
{
struct intel_engine_cs *waiter = waiter_req->ring;
- struct drm_i915_private *dev_priv = waiter->dev->dev_private;
+ struct drm_i915_private *dev_priv = waiter_req->i915;
int ret;

ret = intel_ring_begin(waiter_req, 4);
@@ -2120,7 +2115,7 @@ int intel_ring_idle(struct intel_engine_cs *ring)

/* Make sure we do not trigger any retires */
return __i915_wait_request(req,
- to_i915(ring->dev)->mm.interruptible,
+ req->i915->mm.interruptible,
NULL, NULL);
}

@@ -2248,7 +2243,7 @@ int intel_ring_begin(struct drm_i915_gem_request *req,

WARN_ON(req == NULL);
ring = req->ring;
- dev_priv = ring->dev->dev_private;
+ dev_priv = req->i915;

ret = __intel_ring_prepare(ring, num_dwords * sizeof(uint32_t));
if (ret)
@@ -2383,7 +2378,7 @@ gen8_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
unsigned dispatch_flags)
{
struct intel_engine_cs *ring = req->ring;
- bool ppgtt = USES_PPGTT(ring->dev) &&
+ bool ppgtt = USES_PPGTT(req->i915) &&
!(dispatch_flags & I915_DISPATCH_SECURE);
int ret;

@@ -2457,7 +2452,6 @@ static int gen6_ring_flush(struct drm_i915_gem_request *req,
u32 invalidate, u32 flush)
{
struct intel_engine_cs *ring = req->ring;
- struct drm_device *dev = ring->dev;
uint32_t cmd;
int ret;

@@ -2466,7 +2460,7 @@ static int gen6_ring_flush(struct drm_i915_gem_request *req,
return ret;

cmd = MI_FLUSH_DW;
- if (INTEL_INFO(dev)->gen >= 8)
+ if (INTEL_INFO(req->i915)->gen >= 8)
cmd += 1;

/* We always require a command barrier so that subsequent
@@ -2486,7 +2480,7 @@ static int gen6_ring_flush(struct drm_i915_gem_request *req,
cmd |= MI_INVALIDATE_TLB;
intel_ring_emit(ring, cmd);
intel_ring_emit(ring, I915_GEM_HWS_SCRATCH_ADDR | MI_FLUSH_DW_USE_GTT);
- if (INTEL_INFO(dev)->gen >= 8) {
+ if (INTEL_INFO(req->i915)->gen >= 8) {
intel_ring_emit(ring, 0); /* upper addr */
intel_ring_emit(ring, 0); /* value */
} else {
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:17:01 UTC
We now have two implementations for vmapping a whole object, one for
dma-buf and one for the ringbuffer. If we couple the vmapping into the
obj->pages lifetime, then we can reuse an obj->vmapping for both and at
the same time couple it into the shrinker.

v2: Mark the fallible kmalloc() as __GFP_NOWARN (vsyrjala)
v3: Call unpin_vmap from the right dmabuf unmapper
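The essential idea is coupling the mapping's lifetime to the pinned pages: it
is created lazily on first pin, shared by every user, and destroyed only when
the pages themselves are released, which is also the hook the shrinker uses.
A rough standalone sketch with stand-in allocation calls (malloc/free replace
the driver's vmap()/vunmap()):

#include <stdlib.h>

struct object {
        int pages_pin_count;
        void *vmapping;                 /* lazily created CPU mapping */
};

/* Pin the pages and return the shared mapping, creating it on demand. */
static void *object_pin_vmap(struct object *obj)
{
        obj->pages_pin_count++;
        if (!obj->vmapping)
                obj->vmapping = malloc(4096);   /* stand-in for vmap() */
        return obj->vmapping;
}

/* Unpinning does not unmap: the mapping lives as long as the pages. */
static void object_unpin_vmap(struct object *obj)
{
        obj->pages_pin_count--;
}

/* Releasing the pages (e.g. from the shrinker) tears the mapping down. */
static void object_put_pages(struct object *obj)
{
        if (obj->pages_pin_count)
                return;
        free(obj->vmapping);                    /* stand-in for vunmap() */
        obj->vmapping = NULL;
}

With this shape, both the dma-buf vmap and the ringbuffer mapping in the diff
below become thin wrappers around the pin/unpin pair.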

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_drv.h | 12 +++++---
drivers/gpu/drm/i915/i915_gem.c | 41 +++++++++++++++++++++++++
drivers/gpu/drm/i915/i915_gem_dmabuf.c | 53 ++++-----------------------------
drivers/gpu/drm/i915/intel_ringbuffer.c | 53 ++++++++++-----------------------
4 files changed, 71 insertions(+), 88 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 49a151126b2a..56cf2ffc1eac 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2114,10 +2114,7 @@ struct drm_i915_gem_object {
struct scatterlist *sg;
int last;
} get_page;
-
- /* prime dma-buf support */
- void *dma_buf_vmapping;
- int vmapping_count;
+ void *vmapping;

/** Breadcrumb of last rendering to the buffer.
* There can only be one writer, but we allow for multiple readers.
@@ -2774,12 +2771,19 @@ static inline void i915_gem_object_pin_pages(struct drm_i915_gem_object *obj)
BUG_ON(obj->pages == NULL);
obj->pages_pin_count++;
}
+
static inline void i915_gem_object_unpin_pages(struct drm_i915_gem_object *obj)
{
BUG_ON(obj->pages_pin_count == 0);
obj->pages_pin_count--;
}

+void *__must_check i915_gem_object_pin_vmap(struct drm_i915_gem_object *obj);
+static inline void i915_gem_object_unpin_vmap(struct drm_i915_gem_object *obj)
+{
+ i915_gem_object_unpin_pages(obj);
+}
+
int __must_check i915_mutex_lock_interruptible(struct drm_device *dev);
int i915_gem_object_sync(struct drm_i915_gem_object *obj,
struct intel_engine_cs *to,
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 9df00e694cd9..2912e8714f5b 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1854,6 +1854,11 @@ i915_gem_object_put_pages(struct drm_i915_gem_object *obj)
ops->put_pages(obj);
obj->pages = NULL;

+ if (obj->vmapping) {
+ vunmap(obj->vmapping);
+ obj->vmapping = NULL;
+ }
+
i915_gem_object_invalidate(obj);

return 0;
@@ -2019,6 +2024,42 @@ i915_gem_object_get_pages(struct drm_i915_gem_object *obj)
return 0;
}

+void *i915_gem_object_pin_vmap(struct drm_i915_gem_object *obj)
+{
+ int ret;
+
+ ret = i915_gem_object_get_pages(obj);
+ if (ret)
+ return ERR_PTR(ret);
+
+ i915_gem_object_pin_pages(obj);
+
+ if (obj->vmapping == NULL) {
+ struct sg_page_iter sg_iter;
+ struct page **pages;
+ int n;
+
+ n = obj->base.size >> PAGE_SHIFT;
+ pages = kmalloc(n*sizeof(*pages), GFP_TEMPORARY | __GFP_NOWARN);
+ if (pages == NULL)
+ pages = drm_malloc_ab(n, sizeof(*pages));
+ if (pages != NULL) {
+ n = 0;
+ for_each_sg_page(obj->pages->sgl, &sg_iter, obj->pages->nents, 0)
+ pages[n++] = sg_page_iter_page(&sg_iter);
+
+ obj->vmapping = vmap(pages, n, 0, PAGE_KERNEL);
+ drm_free_large(pages);
+ }
+ if (obj->vmapping == NULL) {
+ i915_gem_object_unpin_pages(obj);
+ return ERR_PTR(-ENOMEM);
+ }
+ }
+
+ return obj->vmapping;
+}
+
void i915_vma_move_to_active(struct i915_vma *vma,
struct drm_i915_gem_request *req)
{
diff --git a/drivers/gpu/drm/i915/i915_gem_dmabuf.c b/drivers/gpu/drm/i915/i915_gem_dmabuf.c
index e9c2bfd85b52..8894648acee0 100644
--- a/drivers/gpu/drm/i915/i915_gem_dmabuf.c
+++ b/drivers/gpu/drm/i915/i915_gem_dmabuf.c
@@ -95,14 +95,12 @@ static void i915_gem_unmap_dma_buf(struct dma_buf_attachment *attachment,
{
struct drm_i915_gem_object *obj = dma_buf_to_obj(attachment->dmabuf);

- mutex_lock(&obj->base.dev->struct_mutex);
-
dma_unmap_sg(attachment->dev, sg->sgl, sg->nents, dir);
sg_free_table(sg);
kfree(sg);

+ mutex_lock(&obj->base.dev->struct_mutex);
i915_gem_object_unpin_pages(obj);
-
mutex_unlock(&obj->base.dev->struct_mutex);
}

@@ -110,51 +108,17 @@ static void *i915_gem_dmabuf_vmap(struct dma_buf *dma_buf)
{
struct drm_i915_gem_object *obj = dma_buf_to_obj(dma_buf);
struct drm_device *dev = obj->base.dev;
- struct sg_page_iter sg_iter;
- struct page **pages;
- int ret, i;
+ void *addr;
+ int ret;

ret = i915_mutex_lock_interruptible(dev);
if (ret)
return ERR_PTR(ret);

- if (obj->dma_buf_vmapping) {
- obj->vmapping_count++;
- goto out_unlock;
- }
-
- ret = i915_gem_object_get_pages(obj);
- if (ret)
- goto err;
-
- i915_gem_object_pin_pages(obj);
-
- ret = -ENOMEM;
-
- pages = drm_malloc_ab(obj->base.size >> PAGE_SHIFT, sizeof(*pages));
- if (pages == NULL)
- goto err_unpin;
-
- i = 0;
- for_each_sg_page(obj->pages->sgl, &sg_iter, obj->pages->nents, 0)
- pages[i++] = sg_page_iter_page(&sg_iter);
-
- obj->dma_buf_vmapping = vmap(pages, i, 0, PAGE_KERNEL);
- drm_free_large(pages);
-
- if (!obj->dma_buf_vmapping)
- goto err_unpin;
-
- obj->vmapping_count = 1;
-out_unlock:
+ addr = i915_gem_object_pin_vmap(obj);
mutex_unlock(&dev->struct_mutex);
- return obj->dma_buf_vmapping;

-err_unpin:
- i915_gem_object_unpin_pages(obj);
-err:
- mutex_unlock(&dev->struct_mutex);
- return ERR_PTR(ret);
+ return addr;
}

static void i915_gem_dmabuf_vunmap(struct dma_buf *dma_buf, void *vaddr)
@@ -163,12 +127,7 @@ static void i915_gem_dmabuf_vunmap(struct dma_buf *dma_buf, void *vaddr)
struct drm_device *dev = obj->base.dev;

mutex_lock(&dev->struct_mutex);
- if (--obj->vmapping_count == 0) {
- vunmap(obj->dma_buf_vmapping);
- obj->dma_buf_vmapping = NULL;
-
- i915_gem_object_unpin_pages(obj);
- }
+ i915_gem_object_unpin_vmap(obj);
mutex_unlock(&dev->struct_mutex);
}

diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index e8a7a1045c06..2728c0ca0871 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -1852,34 +1852,12 @@ static int init_phys_status_page(struct intel_engine_cs *ring)
void intel_unpin_ringbuffer_obj(struct intel_ringbuffer *ringbuf)
{
if (HAS_LLC(ringbuf->obj->base.dev) && !ringbuf->obj->stolen)
- vunmap(ringbuf->virtual_start);
+ i915_gem_object_unpin_vmap(ringbuf->obj);
else
iounmap(ringbuf->virtual_start);
- ringbuf->virtual_start = NULL;
i915_gem_object_ggtt_unpin(ringbuf->obj);
}

-static u32 *vmap_obj(struct drm_i915_gem_object *obj)
-{
- struct sg_page_iter sg_iter;
- struct page **pages;
- void *addr;
- int i;
-
- pages = drm_malloc_ab(obj->base.size >> PAGE_SHIFT, sizeof(*pages));
- if (pages == NULL)
- return NULL;
-
- i = 0;
- for_each_sg_page(obj->pages->sgl, &sg_iter, obj->pages->nents, 0)
- pages[i++] = sg_page_iter_page(&sg_iter);
-
- addr = vmap(pages, i, 0, PAGE_KERNEL);
- drm_free_large(pages);
-
- return addr;
-}
-
int intel_pin_and_map_ringbuffer_obj(struct drm_device *dev,
struct intel_ringbuffer *ringbuf)
{
@@ -1893,15 +1871,14 @@ int intel_pin_and_map_ringbuffer_obj(struct drm_device *dev,
return ret;

ret = i915_gem_object_set_to_cpu_domain(obj, true);
- if (ret) {
- i915_gem_object_ggtt_unpin(obj);
- return ret;
- }
+ if (ret)
+ goto unpin;

- ringbuf->virtual_start = vmap_obj(obj);
- if (ringbuf->virtual_start == NULL) {
- i915_gem_object_ggtt_unpin(obj);
- return -ENOMEM;
+ ringbuf->virtual_start = i915_gem_object_pin_vmap(obj);
+ if (IS_ERR(ringbuf->virtual_start)) {
+ ret = PTR_ERR(ringbuf->virtual_start);
+ ringbuf->virtual_start = NULL;
+ goto unpin;
}
} else {
ret = i915_gem_obj_ggtt_pin(obj, PAGE_SIZE, PIN_MAPPABLE);
@@ -1909,20 +1886,22 @@ int intel_pin_and_map_ringbuffer_obj(struct drm_device *dev,
return ret;

ret = i915_gem_object_set_to_gtt_domain(obj, true);
- if (ret) {
- i915_gem_object_ggtt_unpin(obj);
- return ret;
- }
+ if (ret)
+ goto unpin;

ringbuf->virtual_start = ioremap_wc(dev_priv->gtt.mappable_base +
i915_gem_obj_ggtt_offset(obj), ringbuf->size);
if (ringbuf->virtual_start == NULL) {
- i915_gem_object_ggtt_unpin(obj);
- return -EINVAL;
+ ret = -ENOMEM;
+ goto unpin;
}
}

return 0;
+
+unpin:
+ i915_gem_object_ggtt_unpin(obj);
+ return ret;
}

static void intel_destroy_ringbuffer_obj(struct intel_ringbuffer *ringbuf)
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:17:09 UTC
In order to disambiguate between the pointer to the intel_engine_cs
(called ring) and the intel_ringbuffer (called ringbuf), rename
s/ring/engine/.
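Condensed, the convention after the rename (distilled from the
i915_gem_request.h hunk below) reads:

struct intel_engine_cs;         /* the hardware command streamer ("engine")    */
struct intel_ringbuffer;        /* the buffer of commands in memory ("ringbuf") */

struct drm_i915_gem_request {
        struct intel_engine_cs *engine;         /* was: ring */
        struct intel_ringbuffer *ringbuf;
};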

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_debugfs.c | 11 +--
drivers/gpu/drm/i915/i915_drv.h | 2 +-
drivers/gpu/drm/i915/i915_gem.c | 32 +++----
drivers/gpu/drm/i915/i915_gem_context.c | 70 +++++++-------
drivers/gpu/drm/i915/i915_gem_execbuffer.c | 8 +-
drivers/gpu/drm/i915/i915_gem_gtt.c | 47 +++++-----
drivers/gpu/drm/i915/i915_gem_render_state.c | 18 ++--
drivers/gpu/drm/i915/i915_gem_request.c | 53 ++++-------
drivers/gpu/drm/i915/i915_gem_request.h | 10 +-
drivers/gpu/drm/i915/i915_gpu_error.c | 3 +-
drivers/gpu/drm/i915/i915_guc_submission.c | 8 +-
drivers/gpu/drm/i915/i915_trace.h | 32 +++----
drivers/gpu/drm/i915/intel_breadcrumbs.c | 2 +-
drivers/gpu/drm/i915/intel_display.c | 10 +-
drivers/gpu/drm/i915/intel_lrc.c | 134 +++++++++++++--------------
drivers/gpu/drm/i915/intel_mocs.c | 13 ++-
drivers/gpu/drm/i915/intel_ringbuffer.c | 62 ++++++-------
17 files changed, 240 insertions(+), 275 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 387ae77d3c29..018076c89247 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -185,8 +185,7 @@ describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj)
seq_printf(m, " (%s mappable)", s);
}
if (obj->last_write_req != NULL)
- seq_printf(m, " (%s)",
- i915_gem_request_get_ring(obj->last_write_req)->name);
+ seq_printf(m, " (%s)", obj->last_write_req->engine->name);
if (obj->frontbuffer_bits)
seq_printf(m, " (frontbuffer: 0x%03x)", obj->frontbuffer_bits);
}
@@ -593,14 +592,14 @@ static int i915_gem_pageflip_info(struct seq_file *m, void *data)
pipe, plane);
}
if (work->flip_queued_req) {
- struct intel_engine_cs *ring =
- i915_gem_request_get_ring(work->flip_queued_req);
+ struct intel_engine_cs *engine =
+ work->flip_queued_req->engine;

seq_printf(m, "Flip queued on %s at seqno %x, next seqno %x [current breadcrumb %x], completed? %d\n",
- ring->name,
+ engine->name,
i915_gem_request_get_seqno(work->flip_queued_req),
dev_priv->next_seqno,
- intel_ring_get_seqno(ring),
+ intel_ring_get_seqno(engine),
i915_gem_request_completed(work->flip_queued_req));
} else
seq_printf(m, "Flip not associated with any ring\n");
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 58e9e5e50769..baede4517c70 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -3410,7 +3410,7 @@ wait_remaining_ms_from_jiffies(unsigned long timestamp_jiffies, int to_wait_ms)
}
static inline bool __i915_request_irq_complete(struct drm_i915_gem_request *req)
{
- struct intel_engine_cs *engine = req->ring;
+ struct intel_engine_cs *engine = req->engine;

/* Before we do the heavier coherent read of the seqno,
* check the value (hopefully) in the CPU cacheline.
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 247731672cb1..6622c9bb3af8 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1122,7 +1122,7 @@ i915_gem_object_wait_rendering(struct drm_i915_gem_object *obj,
if (ret)
return ret;

- i = obj->last_write_req->ring->id;
+ i = obj->last_write_req->engine->id;
if (obj->last_read_req[i] == obj->last_write_req)
i915_gem_object_retire__read(obj, i);
else
@@ -1149,7 +1149,7 @@ static void
i915_gem_object_retire_request(struct drm_i915_gem_object *obj,
struct drm_i915_gem_request *req)
{
- int ring = req->ring->id;
+ int ring = req->engine->id;

if (obj->last_read_req[ring] == req)
i915_gem_object_retire__read(obj, ring);
@@ -2062,17 +2062,15 @@ void i915_vma_move_to_active(struct i915_vma *vma,
struct drm_i915_gem_request *req)
{
struct drm_i915_gem_object *obj = vma->obj;
- struct intel_engine_cs *ring;
-
- ring = i915_gem_request_get_ring(req);
+ struct intel_engine_cs *engine = req->engine;

/* Add a reference if we're newly entering the active list. */
if (obj->active == 0)
drm_gem_object_reference(&obj->base);
- obj->active |= intel_ring_flag(ring);
+ obj->active |= intel_ring_flag(engine);

- list_move_tail(&obj->ring_list[ring->id], &ring->active_list);
- i915_gem_request_assign(&obj->last_read_req[ring->id], req);
+ list_move_tail(&obj->ring_list[engine->id], &engine->active_list);
+ i915_gem_request_assign(&obj->last_read_req[engine->id], req);

list_move_tail(&vma->mm_list, &vma->vm->active_list);
}
@@ -2081,7 +2079,7 @@ static void
i915_gem_object_retire__write(struct drm_i915_gem_object *obj)
{
GEM_BUG_ON(obj->last_write_req == NULL);
- GEM_BUG_ON(!(obj->active & intel_ring_flag(obj->last_write_req->ring)));
+ GEM_BUG_ON(!(obj->active & intel_ring_flag(obj->last_write_req->engine)));

i915_gem_request_assign(&obj->last_write_req, NULL);
intel_fb_obj_flush(obj, true, ORIGIN_CS);
@@ -2098,7 +2096,7 @@ i915_gem_object_retire__read(struct drm_i915_gem_object *obj, int ring)
list_del_init(&obj->ring_list[ring]);
i915_gem_request_assign(&obj->last_read_req[ring], NULL);

- if (obj->last_write_req && obj->last_write_req->ring->id == ring)
+ if (obj->last_write_req && obj->last_write_req->engine->id == ring)
i915_gem_object_retire__write(obj);

obj->active &= ~(1 << ring);
@@ -2560,7 +2558,7 @@ __i915_gem_object_sync(struct drm_i915_gem_object *obj,
struct intel_engine_cs *from;
int ret;

- from = i915_gem_request_get_ring(from_req);
+ from = from_req->engine;
if (to == from)
return 0;

@@ -3737,7 +3735,7 @@ i915_gem_busy_ioctl(struct drm_device *dev, void *data,
BUILD_BUG_ON(I915_NUM_RINGS > 16);
args->busy = obj->active << 16;
if (obj->last_write_req)
- args->busy |= obj->last_write_req->ring->id;
+ args->busy |= obj->last_write_req->engine->id;

unref:
drm_gem_object_unreference(&obj->base);
@@ -4068,7 +4066,6 @@ err:

int i915_gem_l3_remap(struct drm_i915_gem_request *req, int slice)
{
- struct intel_ringbuffer *ring = req->ringbuf;
struct drm_i915_private *dev_priv = req->i915;
u32 *remap_info = dev_priv->l3_parity.remap_info[slice];
int i, ret;
@@ -4086,12 +4083,11 @@ int i915_gem_l3_remap(struct drm_i915_gem_request *req, int slice)
* at initialization time.
*/
for (i = 0; i < GEN7_L3LOG_SIZE / 4; i++) {
- intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(1));
- intel_ring_emit_reg(ring, GEN7_L3LOG(slice, i));
- intel_ring_emit(ring, remap_info[i]);
+ intel_ring_emit(req->ringbuf, MI_LOAD_REGISTER_IMM(1));
+ intel_ring_emit_reg(req->ringbuf, GEN7_L3LOG(slice, i));
+ intel_ring_emit(req->ringbuf, remap_info[i]);
}
-
- intel_ring_advance(ring);
+ intel_ring_advance(req->ringbuf);

return ret;
}
diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index d58de7e084dc..dece033cf604 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -450,14 +450,14 @@ void i915_gem_context_fini(struct drm_device *dev)

int i915_gem_context_enable(struct drm_i915_gem_request *req)
{
- struct intel_engine_cs *ring = req->ring;
+ struct intel_engine_cs *engine = req->engine;
int ret;

if (i915.enable_execlists) {
- if (ring->init_context == NULL)
+ if (engine->init_context == NULL)
return 0;

- ret = ring->init_context(req);
+ ret = engine->init_context(req);
} else
ret = i915_switch_context(req);

@@ -534,7 +534,7 @@ mi_set_context(struct drm_i915_gem_request *req, u32 hw_flags)
* itlb_before_ctx_switch.
*/
if (IS_GEN6(req->i915)) {
- ret = req->ring->flush(req, I915_GEM_GPU_DOMAINS, 0);
+ ret = req->engine->flush(req, I915_GEM_GPU_DOMAINS, 0);
if (ret)
return ret;
}
@@ -562,7 +562,7 @@ mi_set_context(struct drm_i915_gem_request *req, u32 hw_flags)

intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(num_rings));
for_each_ring(signaller, req->i915, i) {
- if (signaller == req->ring)
+ if (signaller == req->engine)
continue;

intel_ring_emit_reg(ring, RING_PSMI_CTL(signaller->mmio_base));
@@ -587,7 +587,7 @@ mi_set_context(struct drm_i915_gem_request *req, u32 hw_flags)

intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(num_rings));
for_each_ring(signaller, req->i915, i) {
- if (signaller == req->ring)
+ if (signaller == req->engine)
continue;

intel_ring_emit_reg(ring, RING_PSMI_CTL(signaller->mmio_base));
@@ -657,24 +657,18 @@ needs_pd_load_post(struct intel_engine_cs *ring, struct intel_context *to,
static int do_switch(struct drm_i915_gem_request *req)
{
struct intel_context *to = req->ctx;
- struct intel_engine_cs *ring = req->ring;
- struct drm_i915_private *dev_priv = ring->dev->dev_private;
- struct intel_context *from = ring->last_context;
+ struct intel_engine_cs *engine = req->engine;
+ struct intel_context *from = engine->last_context;
u32 hw_flags = 0;
int ret, i;

- if (from != NULL && ring == &dev_priv->ring[RCS]) {
- BUG_ON(from->legacy_hw_ctx.rcs_state == NULL);
- BUG_ON(!i915_gem_obj_is_pinned(from->legacy_hw_ctx.rcs_state));
- }
-
- if (should_skip_switch(ring, from, to))
+ if (should_skip_switch(engine, from, to))
return 0;

/* Trying to pin first makes error handling easier. */
- if (ring == &dev_priv->ring[RCS]) {
+ if (engine->id == RCS) {
ret = i915_gem_obj_ggtt_pin(to->legacy_hw_ctx.rcs_state,
- get_context_alignment(ring->dev), 0);
+ get_context_alignment(engine->dev), 0);
if (ret)
return ret;
}
@@ -684,23 +678,23 @@ static int do_switch(struct drm_i915_gem_request *req)
* evict_everything - as a last ditch gtt defrag effort that also
* switches to the default context. Hence we need to reload from here.
*/
- from = ring->last_context;
+ from = engine->last_context;

- if (needs_pd_load_pre(ring, to)) {
+ if (needs_pd_load_pre(engine, to)) {
/* Older GENs and non render rings still want the load first,
* "PP_DCLV followed by PP_DIR_BASE register through Load
* Register Immediate commands in Ring Buffer before submitting
* a context."*/
- trace_switch_mm(ring, to);
+ trace_switch_mm(engine, to);
ret = to->ppgtt->switch_mm(to->ppgtt, req);
if (ret)
goto unpin_out;

/* Doing a PD load always reloads the page dirs */
- to->ppgtt->pd_dirty_rings &= ~intel_ring_flag(ring);
+ to->ppgtt->pd_dirty_rings &= ~intel_ring_flag(engine);
}

- if (ring != &dev_priv->ring[RCS]) {
+ if (engine->id != RCS) {
if (from)
i915_gem_context_unreference(from);
goto done;
@@ -725,14 +719,14 @@ static int do_switch(struct drm_i915_gem_request *req)
* space. This means we must enforce that a page table load
* occur when this occurs. */
} else if (to->ppgtt &&
- (intel_ring_flag(ring) & to->ppgtt->pd_dirty_rings)) {
+ (intel_ring_flag(engine) & to->ppgtt->pd_dirty_rings)) {
hw_flags |= MI_FORCE_RESTORE;
- to->ppgtt->pd_dirty_rings &= ~intel_ring_flag(ring);
+ to->ppgtt->pd_dirty_rings &= ~intel_ring_flag(engine);
}

/* We should never emit switch_mm more than once */
- WARN_ON(needs_pd_load_pre(ring, to) &&
- needs_pd_load_post(ring, to, hw_flags));
+ WARN_ON(needs_pd_load_pre(engine, to) &&
+ needs_pd_load_post(engine, to, hw_flags));

ret = mi_set_context(req, hw_flags);
if (ret)
@@ -741,8 +735,8 @@ static int do_switch(struct drm_i915_gem_request *req)
/* GEN8 does *not* require an explicit reload if the PDPs have been
* setup, and we do not wish to move them.
*/
- if (needs_pd_load_post(ring, to, hw_flags)) {
- trace_switch_mm(ring, to);
+ if (needs_pd_load_post(engine, to, hw_flags)) {
+ trace_switch_mm(engine, to);
ret = to->ppgtt->switch_mm(to->ppgtt, req);
/* The hardware context switch is emitted, but we haven't
* actually changed the state - so it's probably safe to bail
@@ -768,8 +762,8 @@ static int do_switch(struct drm_i915_gem_request *req)
}

if (!to->legacy_hw_ctx.initialized) {
- if (ring->init_context) {
- ret = ring->init_context(req);
+ if (engine->init_context) {
+ ret = engine->init_context(req);
if (ret)
goto unpin_out;
}
@@ -801,12 +795,11 @@ static int do_switch(struct drm_i915_gem_request *req)

done:
i915_gem_context_reference(to);
- ring->last_context = to;
-
+ engine->last_context = to;
return 0;

unpin_out:
- if (ring->id == RCS)
+ if (engine->id == RCS)
i915_gem_object_ggtt_unpin(to->legacy_hw_ctx.rcs_state);
return ret;
}
@@ -826,17 +819,18 @@ unpin_out:
*/
int i915_switch_context(struct drm_i915_gem_request *req)
{
- struct intel_engine_cs *ring = req->ring;

WARN_ON(i915.enable_execlists);
WARN_ON(!mutex_is_locked(&req->i915->dev->struct_mutex));

if (req->ctx->legacy_hw_ctx.rcs_state == NULL) { /* We have the fake context */
- if (req->ctx != ring->last_context) {
+ struct intel_engine_cs *engine = req->engine;
+
+ if (req->ctx != engine->last_context) {
i915_gem_context_reference(req->ctx);
- if (ring->last_context)
- i915_gem_context_unreference(ring->last_context);
- ring->last_context = req->ctx;
+ if (engine->last_context)
+ i915_gem_context_unreference(engine->last_context);
+ engine->last_context = req->ctx;
}
return 0;
}
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index 603a247ac333..e7df91f9a51f 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -942,7 +942,7 @@ static int
i915_gem_execbuffer_move_to_gpu(struct drm_i915_gem_request *req,
struct list_head *vmas)
{
- const unsigned other_rings = ~intel_ring_flag(req->ring);
+ const unsigned other_rings = ~intel_ring_flag(req->engine);
struct i915_vma *vma;
uint32_t flush_domains = 0;
bool flush_chipset = false;
@@ -952,7 +952,7 @@ i915_gem_execbuffer_move_to_gpu(struct drm_i915_gem_request *req,
struct drm_i915_gem_object *obj = vma->obj;

if (obj->active & other_rings) {
- ret = i915_gem_object_sync(obj, req->ring, &req);
+ ret = i915_gem_object_sync(obj, req->engine, &req);
if (ret)
return ret;
}
@@ -964,7 +964,7 @@ i915_gem_execbuffer_move_to_gpu(struct drm_i915_gem_request *req,
}

if (flush_chipset)
- i915_gem_chipset_flush(req->ring->dev);
+ i915_gem_chipset_flush(req->engine->dev);

if (flush_domains & I915_GEM_DOMAIN_GTT)
wmb();
@@ -1151,7 +1151,7 @@ i915_reset_gen7_sol_offsets(struct drm_i915_gem_request *req)
struct intel_ringbuffer *ring = req->ringbuf;
int ret, i;

- if (!IS_GEN7(req->i915) || req->ring->id != RCS) {
+ if (!IS_GEN7(req->i915) || req->engine->id != RCS) {
DRM_DEBUG("sol reset is gen7/rcs only\n");
return -EINVAL;
}
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 98841b05f764..cb7cb59d4c4a 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -666,10 +666,10 @@ static int gen8_write_pdp(struct drm_i915_gem_request *req,
return ret;

intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(1));
- intel_ring_emit_reg(ring, GEN8_RING_PDP_UDW(req->ring, entry));
+ intel_ring_emit_reg(ring, GEN8_RING_PDP_UDW(req->engine, entry));
intel_ring_emit(ring, upper_32_bits(addr));
intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(1));
- intel_ring_emit_reg(ring, GEN8_RING_PDP_LDW(req->ring, entry));
+ intel_ring_emit_reg(ring, GEN8_RING_PDP_LDW(req->engine, entry));
intel_ring_emit(ring, lower_32_bits(addr));
intel_ring_advance(ring);

@@ -1652,7 +1652,9 @@ static int hsw_mm_switch(struct i915_hw_ppgtt *ppgtt,
int ret;

/* NB: TLBs must be flushed and invalidated before a switch */
- ret = req->ring->flush(req, I915_GEM_GPU_DOMAINS, I915_GEM_GPU_DOMAINS);
+ ret = req->engine->flush(req,
+ I915_GEM_GPU_DOMAINS,
+ I915_GEM_GPU_DOMAINS);
if (ret)
return ret;

@@ -1661,9 +1663,9 @@ static int hsw_mm_switch(struct i915_hw_ppgtt *ppgtt,
return ret;

intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(2));
- intel_ring_emit_reg(ring, RING_PP_DIR_DCLV(req->ring));
+ intel_ring_emit_reg(ring, RING_PP_DIR_DCLV(req->engine));
intel_ring_emit(ring, PP_DIR_DCLV_2G);
- intel_ring_emit_reg(ring, RING_PP_DIR_BASE(req->ring));
+ intel_ring_emit_reg(ring, RING_PP_DIR_BASE(req->engine));
intel_ring_emit(ring, get_pd_offset(ppgtt));
intel_ring_emit(ring, MI_NOOP);
intel_ring_advance(ring);
@@ -1674,11 +1676,10 @@ static int hsw_mm_switch(struct i915_hw_ppgtt *ppgtt,
static int vgpu_mm_switch(struct i915_hw_ppgtt *ppgtt,
struct drm_i915_gem_request *req)
{
- struct intel_engine_cs *ring = req->ring;
- struct drm_i915_private *dev_priv = to_i915(ppgtt->base.dev);
+ struct drm_i915_private *dev_priv = req->i915;

- I915_WRITE(RING_PP_DIR_DCLV(ring), PP_DIR_DCLV_2G);
- I915_WRITE(RING_PP_DIR_BASE(ring), get_pd_offset(ppgtt));
+ I915_WRITE(RING_PP_DIR_DCLV(req->engine), PP_DIR_DCLV_2G);
+ I915_WRITE(RING_PP_DIR_BASE(req->engine), get_pd_offset(ppgtt));
return 0;
}

@@ -1689,7 +1690,9 @@ static int gen7_mm_switch(struct i915_hw_ppgtt *ppgtt,
int ret;

/* NB: TLBs must be flushed and invalidated before a switch */
- ret = req->ring->flush(req, I915_GEM_GPU_DOMAINS, I915_GEM_GPU_DOMAINS);
+ ret = req->engine->flush(req,
+ I915_GEM_GPU_DOMAINS,
+ I915_GEM_GPU_DOMAINS);
if (ret)
return ret;

@@ -1698,16 +1701,18 @@ static int gen7_mm_switch(struct i915_hw_ppgtt *ppgtt,
return ret;

intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(2));
- intel_ring_emit_reg(ring, RING_PP_DIR_DCLV(req->ring));
+ intel_ring_emit_reg(ring, RING_PP_DIR_DCLV(req->engine));
intel_ring_emit(ring, PP_DIR_DCLV_2G);
- intel_ring_emit_reg(ring, RING_PP_DIR_BASE(req->ring));
+ intel_ring_emit_reg(ring, RING_PP_DIR_BASE(req->engine));
intel_ring_emit(ring, get_pd_offset(ppgtt));
intel_ring_emit(ring, MI_NOOP);
intel_ring_advance(ring);

/* XXX: RCS is the only one to auto invalidate the TLBs? */
- if (req->ring->id != RCS) {
- ret = req->ring->flush(req, I915_GEM_GPU_DOMAINS, I915_GEM_GPU_DOMAINS);
+ if (req->engine->id != RCS) {
+ ret = req->engine->flush(req,
+ I915_GEM_GPU_DOMAINS,
+ I915_GEM_GPU_DOMAINS);
if (ret)
return ret;
}
@@ -1718,15 +1723,12 @@ static int gen7_mm_switch(struct i915_hw_ppgtt *ppgtt,
static int gen6_mm_switch(struct i915_hw_ppgtt *ppgtt,
struct drm_i915_gem_request *req)
{
- struct intel_engine_cs *ring = req->ring;
- struct drm_device *dev = ppgtt->base.dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
-
+ struct drm_i915_private *dev_priv = req->i915;

- I915_WRITE(RING_PP_DIR_DCLV(ring), PP_DIR_DCLV_2G);
- I915_WRITE(RING_PP_DIR_BASE(ring), get_pd_offset(ppgtt));
+ I915_WRITE(RING_PP_DIR_DCLV(req->engine), PP_DIR_DCLV_2G);
+ I915_WRITE(RING_PP_DIR_BASE(req->engine), get_pd_offset(ppgtt));

- POSTING_READ(RING_PP_DIR_DCLV(ring));
+ POSTING_READ(RING_PP_DIR_DCLV(req->engine));

return 0;
}
@@ -2169,8 +2171,7 @@ int i915_ppgtt_init_hw(struct drm_device *dev)

int i915_ppgtt_init_ring(struct drm_i915_gem_request *req)
{
- struct drm_i915_private *dev_priv = req->ring->dev->dev_private;
- struct i915_hw_ppgtt *ppgtt = dev_priv->mm.aliasing_ppgtt;
+ struct i915_hw_ppgtt *ppgtt = req->i915->mm.aliasing_ppgtt;

if (i915.enable_execlists)
return 0;
diff --git a/drivers/gpu/drm/i915/i915_gem_render_state.c b/drivers/gpu/drm/i915/i915_gem_render_state.c
index fc7e6d5c6251..bee3f0ccd0cd 100644
--- a/drivers/gpu/drm/i915/i915_gem_render_state.c
+++ b/drivers/gpu/drm/i915/i915_gem_render_state.c
@@ -198,25 +198,25 @@ int i915_gem_render_state_init(struct drm_i915_gem_request *req)
struct render_state so;
int ret;

- ret = i915_gem_render_state_prepare(req->ring, &so);
+ ret = i915_gem_render_state_prepare(req->engine, &so);
if (ret)
return ret;

if (so.rodata == NULL)
return 0;

- ret = req->ring->dispatch_execbuffer(req, so.ggtt_offset,
- so.rodata->batch_items * 4,
- I915_DISPATCH_SECURE);
+ ret = req->engine->dispatch_execbuffer(req, so.ggtt_offset,
+ so.rodata->batch_items * 4,
+ I915_DISPATCH_SECURE);
if (ret)
goto out;

if (so.aux_batch_size > 8) {
- ret = req->ring->dispatch_execbuffer(req,
- (so.ggtt_offset +
- so.aux_batch_offset),
- so.aux_batch_size,
- I915_DISPATCH_SECURE);
+ ret = req->engine->dispatch_execbuffer(req,
+ (so.ggtt_offset +
+ so.aux_batch_offset),
+ so.aux_batch_size,
+ I915_DISPATCH_SECURE);
if (ret)
goto out;
}
diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
index 85067069995e..8adf2c134048 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/i915_gem_request.c
@@ -37,7 +37,7 @@ static const char *i915_fence_get_driver_name(struct fence *fence)

static const char *i915_fence_get_timeline_name(struct fence *fence)
{
- return to_i915_request(fence)->ring->name;
+ return to_i915_request(fence)->engine->name;
}

static bool i915_fence_signaled(struct fence *fence)
@@ -90,7 +90,7 @@ static void i915_fence_timeline_value_str(struct fence *fence, char *str,
int size)
{
snprintf(str, size, "%u",
- intel_ring_get_seqno(to_i915_request(fence)->ring));
+ intel_ring_get_seqno(to_i915_request(fence)->engine));
}

static void i915_fence_release(struct fence *fence)
@@ -195,11 +195,11 @@ i915_gem_get_seqno(struct drm_i915_private *dev_priv, u32 *seqno)
return 0;
}

-int i915_gem_request_alloc(struct intel_engine_cs *ring,
+int i915_gem_request_alloc(struct intel_engine_cs *engine,
struct intel_context *ctx,
struct drm_i915_gem_request **req_out)
{
- struct drm_i915_private *dev_priv = ring->i915;
+ struct drm_i915_private *dev_priv = engine->i915;
unsigned reset_counter = i915_reset_counter(&dev_priv->gpu_error);
struct drm_i915_gem_request *req;
u32 seqno;
@@ -230,11 +230,11 @@ int i915_gem_request_alloc(struct intel_engine_cs *ring,
fence_init(&req->fence,
&i915_fence_ops,
&req->lock,
- ring->fence_context,
+ engine->fence_context,
seqno);

req->i915 = dev_priv;
- req->ring = ring;
+ req->engine = engine;
req->reset_counter = reset_counter;
req->ctx = ctx;
i915_gem_context_reference(req->ctx);
@@ -279,7 +279,6 @@ err:
int i915_gem_request_add_to_client(struct drm_i915_gem_request *req,
struct drm_file *file)
{
- struct drm_i915_private *dev_private;
struct drm_i915_file_private *file_priv;

WARN_ON(!req || !file || req->file_priv);
@@ -290,7 +289,6 @@ int i915_gem_request_add_to_client(struct drm_i915_gem_request *req,
if (req->file_priv)
return -EINVAL;

- dev_private = req->ring->dev->dev_private;
file_priv = file->driver_priv;

spin_lock(&file_priv->mm.lock);
@@ -332,7 +330,7 @@ void i915_gem_request_cancel(struct drm_i915_gem_request *req)
{
intel_ring_reserved_space_cancel(req->ringbuf);
if (i915.enable_execlists) {
- if (req->ctx != req->ring->default_context)
+ if (req->ctx != req->engine->default_context)
intel_lr_context_unpin(req);
}
__i915_gem_request_release(req);
@@ -358,7 +356,7 @@ static void i915_gem_request_retire(struct drm_i915_gem_request *request)
void
i915_gem_request_retire_upto(struct drm_i915_gem_request *req)
{
- struct intel_engine_cs *engine = req->ring;
+ struct intel_engine_cs *engine = req->engine;
struct drm_i915_gem_request *tmp;

lockdep_assert_held(&engine->dev->struct_mutex);
@@ -403,8 +401,6 @@ void __i915_add_request(struct drm_i915_gem_request *request,
struct drm_i915_gem_object *obj,
bool flush_caches)
{
- struct intel_engine_cs *ring;
- struct drm_i915_private *dev_priv;
struct intel_ringbuffer *ringbuf;
u32 request_start;
int ret;
@@ -412,8 +408,6 @@ void __i915_add_request(struct drm_i915_gem_request *request,
if (WARN_ON(request == NULL))
return;

- ring = request->ring;
- dev_priv = ring->dev->dev_private;
ringbuf = request->ringbuf;

/*
@@ -448,9 +442,9 @@ void __i915_add_request(struct drm_i915_gem_request *request,
request->postfix = intel_ring_get_tail(ringbuf);

if (i915.enable_execlists)
- ret = ring->emit_request(request);
+ ret = request->engine->emit_request(request);
else {
- ret = ring->add_request(request);
+ ret = request->engine->add_request(request);

request->tail = intel_ring_get_tail(ringbuf);
}
@@ -468,13 +462,13 @@ void __i915_add_request(struct drm_i915_gem_request *request,
request->batch_obj = obj;

request->emitted_jiffies = jiffies;
- request->previous_seqno = ring->last_submitted_seqno;
- ring->last_submitted_seqno = request->fence.seqno;
- list_add_tail(&request->list, &ring->request_list);
+ request->previous_seqno = request->engine->last_submitted_seqno;
+ request->engine->last_submitted_seqno = request->fence.seqno;
+ list_add_tail(&request->list, &request->engine->request_list);

trace_i915_gem_request_add(request);

- i915_gem_mark_busy(dev_priv);
+ i915_gem_mark_busy(request->i915);

/* Sanity check that the reserved size was large enough. */
intel_ring_reserved_space_end(ringbuf);
@@ -627,7 +621,7 @@ int __i915_wait_request(struct drm_i915_gem_request *req,
set_task_state(wait.task, state);

/* Optimistic spin for the next ~jiffie before touching IRQs */
- if (intel_engine_add_wait(req->ring, &wait)) {
+ if (intel_engine_add_wait(req->engine, &wait)) {
if (__i915_spin_request(req, &wait, state))
goto complete;

@@ -635,7 +629,7 @@ int __i915_wait_request(struct drm_i915_gem_request *req,
* as we enabled it, we need to kick ourselves to do a
* coherent check on the seqno before we sleep.
*/
- if (intel_engine_enable_wait_irq(req->ring, &wait))
+ if (intel_engine_enable_wait_irq(req->engine, &wait))
goto wakeup;
}

@@ -670,7 +664,7 @@ wakeup:
}

complete:
- intel_engine_remove_wait(req->ring, &wait);
+ intel_engine_remove_wait(req->engine, &wait);
__set_task_state(wait.task, TASK_RUNNING);
trace_i915_gem_request_wait_end(req);

@@ -691,7 +685,7 @@ complete:
}

if (ret == 0 && !IS_ERR_OR_NULL(rps) &&
- req->fence.seqno == req->ring->last_submitted_seqno) {
+ req->fence.seqno == req->engine->last_submitted_seqno) {
/* The GPU is now idle and this client has stalled.
* Since no other client has submitted a request in the
* meantime, assume that this client is the only one
@@ -717,20 +711,13 @@ complete:
int
i915_wait_request(struct drm_i915_gem_request *req)
{
- struct drm_device *dev;
- struct drm_i915_private *dev_priv;
- bool interruptible;
int ret;

BUG_ON(req == NULL);

- dev = req->ring->dev;
- dev_priv = dev->dev_private;
- interruptible = dev_priv->mm.interruptible;
-
- BUG_ON(!mutex_is_locked(&dev->struct_mutex));
+ BUG_ON(!mutex_is_locked(&req->i915->dev->struct_mutex));

- ret = __i915_wait_request(req, interruptible, NULL, NULL);
+ ret = __i915_wait_request(req, req->i915->mm.interruptible, NULL, NULL);
if (ret)
return ret;

diff --git a/drivers/gpu/drm/i915/i915_gem_request.h b/drivers/gpu/drm/i915/i915_gem_request.h
index 6b3de827929a..802862e5007d 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.h
+++ b/drivers/gpu/drm/i915/i915_gem_request.h
@@ -46,7 +46,7 @@ struct drm_i915_gem_request {

/** On Which ring this request was generated */
struct drm_i915_private *i915;
- struct intel_engine_cs *ring;
+ struct intel_engine_cs *engine;
unsigned reset_counter;

/** GEM sequence number associated with the previous request,
@@ -133,9 +133,9 @@ i915_gem_request_get_seqno(struct drm_i915_gem_request *req)
}

static inline struct intel_engine_cs *
-i915_gem_request_get_ring(struct drm_i915_gem_request *req)
+i915_gem_request_get_engine(struct drm_i915_gem_request *req)
{
- return req ? req->ring : NULL;
+ return req ? req->engine : NULL;
}

static inline struct drm_i915_gem_request *
@@ -198,13 +198,13 @@ i915_seqno_passed(uint32_t seq1, uint32_t seq2)

static inline bool i915_gem_request_started(struct drm_i915_gem_request *req)
{
- return i915_seqno_passed(intel_ring_get_seqno(req->ring),
+ return i915_seqno_passed(intel_ring_get_seqno(req->engine),
req->previous_seqno);
}

static inline bool i915_gem_request_completed(struct drm_i915_gem_request *req)
{
- return i915_seqno_passed(intel_ring_get_seqno(req->ring),
+ return i915_seqno_passed(intel_ring_get_seqno(req->engine),
req->fence.seqno);
}

diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index 84ce91275fdd..5bf208d8009e 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -721,8 +721,7 @@ static void capture_bo(struct drm_i915_error_buffer *err,
err->dirty = obj->dirty;
err->purgeable = obj->madv != I915_MADV_WILLNEED;
err->userptr = obj->userptr.mm != NULL;
- err->ring = obj->last_write_req ?
- i915_gem_request_get_ring(obj->last_write_req)->id : -1;
+ err->ring = obj->last_write_req ? obj->last_write_req->engine->id : -1;
err->cache_level = obj->cache_level;
}

diff --git a/drivers/gpu/drm/i915/i915_guc_submission.c b/drivers/gpu/drm/i915/i915_guc_submission.c
index 56d3064d32ed..eaf680ce5c9c 100644
--- a/drivers/gpu/drm/i915/i915_guc_submission.c
+++ b/drivers/gpu/drm/i915/i915_guc_submission.c
@@ -510,7 +510,7 @@ int i915_guc_wq_check_space(struct i915_guc_client *gc)
static int guc_add_workqueue_item(struct i915_guc_client *gc,
struct drm_i915_gem_request *rq)
{
- enum intel_ring_id ring_id = rq->ring->id;
+ enum intel_ring_id ring_id = rq->engine->id;
struct guc_wq_item *wqi;
void *base;
u32 tail, wq_len, wq_off, space;
@@ -548,7 +548,7 @@ static int guc_add_workqueue_item(struct i915_guc_client *gc,
WQ_NO_WCFLUSH_WAIT;

/* The GuC wants only the low-order word of the context descriptor */
- wqi->context_desc = (u32)intel_lr_context_descriptor(rq->ctx, rq->ring);
+ wqi->context_desc = (u32)intel_lr_context_descriptor(rq->ctx, rq->engine);

/* The GuC firmware wants the tail index in QWords, not bytes */
tail = rq->ringbuf->tail >> 3;
@@ -565,7 +565,7 @@ static int guc_add_workqueue_item(struct i915_guc_client *gc,
/* Update the ringbuffer pointer in a saved context image */
static void lr_context_update(struct drm_i915_gem_request *rq)
{
- enum intel_ring_id ring_id = rq->ring->id;
+ enum intel_ring_id ring_id = rq->engine->id;
struct drm_i915_gem_object *ctx_obj = rq->ctx->engine[ring_id].state;
struct drm_i915_gem_object *rb_obj = rq->ringbuf->obj;
struct page *page;
@@ -594,7 +594,7 @@ int i915_guc_submit(struct i915_guc_client *client,
struct drm_i915_gem_request *rq)
{
struct intel_guc *guc = client->guc;
- enum intel_ring_id ring_id = rq->ring->id;
+ enum intel_ring_id ring_id = rq->engine->id;
int q_ret, b_ret;

/* Need this because of the deferred pin ctx and ring */
diff --git a/drivers/gpu/drm/i915/i915_trace.h b/drivers/gpu/drm/i915/i915_trace.h
index dc2ff5cac2f4..0204ff72b3e4 100644
--- a/drivers/gpu/drm/i915/i915_trace.h
+++ b/drivers/gpu/drm/i915/i915_trace.h
@@ -475,7 +475,7 @@ TRACE_EVENT(i915_gem_ring_sync_to,
TP_fast_assign(
__entry->dev = from->dev->primary->index;
__entry->sync_from = from->id;
- __entry->sync_to = to_req->ring->id;
+ __entry->sync_to = to_req->engine->id;
__entry->seqno = i915_gem_request_get_seqno(req);
),

@@ -497,11 +497,9 @@ TRACE_EVENT(i915_gem_ring_dispatch,
),

TP_fast_assign(
- struct intel_engine_cs *ring =
- i915_gem_request_get_ring(req);
- __entry->dev = ring->dev->primary->index;
- __entry->ring = ring->id;
- __entry->seqno = i915_gem_request_get_seqno(req);
+ __entry->dev = req->i915->dev->primary->index;
+ __entry->ring = req->engine->id;
+ __entry->seqno = req->fence.seqno;
__entry->flags = flags;
fence_enable_sw_signaling(&req->fence);
),
@@ -522,8 +520,8 @@ TRACE_EVENT(i915_gem_ring_flush,
),

TP_fast_assign(
- __entry->dev = req->ring->dev->primary->index;
- __entry->ring = req->ring->id;
+ __entry->dev = req->engine->dev->primary->index;
+ __entry->ring = req->engine->id;
__entry->invalidate = invalidate;
__entry->flush = flush;
),
@@ -544,11 +542,9 @@ DECLARE_EVENT_CLASS(i915_gem_request,
),

TP_fast_assign(
- struct intel_engine_cs *ring =
- i915_gem_request_get_ring(req);
- __entry->dev = ring->dev->primary->index;
- __entry->ring = ring->id;
- __entry->seqno = i915_gem_request_get_seqno(req);
+ __entry->dev = req->i915->dev->primary->index;
+ __entry->ring = req->engine->id;
+ __entry->seqno = req->fence.seqno;
),

TP_printk("dev=%u, ring=%u, seqno=%u",
@@ -608,13 +604,11 @@ TRACE_EVENT(i915_gem_request_wait_begin,
* less desirable.
*/
TP_fast_assign(
- struct intel_engine_cs *ring =
- i915_gem_request_get_ring(req);
- __entry->dev = ring->dev->primary->index;
- __entry->ring = ring->id;
- __entry->seqno = i915_gem_request_get_seqno(req);
+ __entry->dev = req->i915->dev->primary->index;
+ __entry->ring = req->engine->id;
+ __entry->seqno = req->fence.seqno;
__entry->blocking =
- mutex_is_locked(&ring->dev->struct_mutex);
+ mutex_is_locked(&req->i915->dev->struct_mutex);
),

TP_printk("dev=%u, ring=%u, seqno=%u, blocking=%s",
diff --git a/drivers/gpu/drm/i915/intel_breadcrumbs.c b/drivers/gpu/drm/i915/intel_breadcrumbs.c
index aca1b72edcd8..5ba8b4cd8a18 100644
--- a/drivers/gpu/drm/i915/intel_breadcrumbs.c
+++ b/drivers/gpu/drm/i915/intel_breadcrumbs.c
@@ -419,7 +419,7 @@ static int intel_breadcrumbs_signaler(void *arg)

int intel_engine_enable_signaling(struct drm_i915_gem_request *request)
{
- struct intel_engine_cs *engine = request->ring;
+ struct intel_engine_cs *engine = request->engine;
struct intel_breadcrumbs *b = &engine->breadcrumbs;
struct rb_node *parent, **p;
struct task_struct *task;
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index b28e783f6f04..323b0d905c89 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -11215,7 +11215,7 @@ static int intel_gen7_queue_flip(struct drm_device *dev,
}

len = 4;
- if (req->ring->id == RCS) {
+ if (req->engine->id == RCS) {
len += 6;
/*
* On Gen 8, SRM is now taking an extra dword to accommodate
@@ -11253,7 +11253,7 @@ static int intel_gen7_queue_flip(struct drm_device *dev,
* for the RCS also doesn't appear to drop events. Setting the DERRMR
* to zero does lead to lockups within MI_DISPLAY_FLIP.
*/
- if (req->ring->id == RCS) {
+ if (req->engine->id == RCS) {
intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(1));
intel_ring_emit_reg(ring, DERRMR);
intel_ring_emit(ring, ~(DERRMR_PIPEA_PRI_FLIP_DONE |
@@ -11266,7 +11266,7 @@ static int intel_gen7_queue_flip(struct drm_device *dev,
intel_ring_emit(ring, MI_STORE_REGISTER_MEM |
MI_SRM_LRM_GLOBAL_GTT);
intel_ring_emit_reg(ring, DERRMR);
- intel_ring_emit(ring, req->ring->scratch.gtt_offset + 256);
+ intel_ring_emit(ring, req->engine->scratch.gtt_offset + 256);
if (IS_GEN8(req->i915)) {
intel_ring_emit(ring, 0);
intel_ring_emit(ring, MI_NOOP);
@@ -11310,7 +11310,7 @@ static bool use_mmio_flip(struct intel_engine_cs *ring,
false))
return true;
else
- return ring != i915_gem_request_get_ring(obj->last_write_req);
+ return ring != i915_gem_request_get_engine(obj->last_write_req);
}

static void skl_do_mmio_flip(struct intel_crtc *intel_crtc,
@@ -11654,7 +11654,7 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
} else if (IS_IVYBRIDGE(dev) || IS_HASWELL(dev)) {
ring = &dev_priv->ring[BCS];
} else if (INTEL_INFO(dev)->gen >= 7) {
- ring = i915_gem_request_get_ring(obj->last_write_req);
+ ring = i915_gem_request_get_engine(obj->last_write_req);
if (ring == NULL || ring->id != RCS)
ring = &dev_priv->ring[BCS];
} else {
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 4f1944929330..1b70a76df31d 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -287,11 +287,9 @@ u32 intel_execlists_ctx_id(struct drm_i915_gem_object *ctx_obj)

static bool disable_lite_restore_wa(struct intel_engine_cs *ring)
{
- struct drm_device *dev = ring->dev;
-
- return (IS_SKL_REVID(dev, 0, SKL_REVID_B0) ||
- IS_BXT_REVID(dev, 0, BXT_REVID_A1)) &&
- (ring->id == VCS || ring->id == VCS2);
+ return (IS_SKL_REVID(ring->dev, 0, SKL_REVID_B0) ||
+ IS_BXT_REVID(ring->dev, 0, BXT_REVID_A1)) &&
+ (ring->id == VCS || ring->id == VCS2);
}

uint64_t intel_lr_context_descriptor(struct intel_context *ctx,
@@ -305,8 +303,8 @@ uint64_t intel_lr_context_descriptor(struct intel_context *ctx,
WARN_ON(lrca & 0xFFFFFFFF00000FFFULL);

desc = GEN8_CTX_VALID;
- desc |= GEN8_CTX_ADDRESSING_MODE(dev) << GEN8_CTX_ADDRESSING_MODE_SHIFT;
- if (IS_GEN8(ctx_obj->base.dev))
+ desc |= GEN8_CTX_ADDRESSING_MODE(ring->i915) << GEN8_CTX_ADDRESSING_MODE_SHIFT;
+ if (IS_GEN8(ring->i915))
desc |= GEN8_CTX_L3LLC_COHERENT;
desc |= GEN8_CTX_PRIVILEGE;
desc |= lrca;
@@ -328,41 +326,40 @@ static void execlists_elsp_write(struct drm_i915_gem_request *rq0,
struct drm_i915_gem_request *rq1)
{

- struct intel_engine_cs *ring = rq0->ring;
+ struct intel_engine_cs *engine = rq0->engine;
struct drm_i915_private *dev_priv = rq0->i915;
uint64_t desc[2];

if (rq1) {
- desc[1] = intel_lr_context_descriptor(rq1->ctx, rq1->ring);
+ desc[1] = intel_lr_context_descriptor(rq1->ctx, rq1->engine);
rq1->elsp_submitted++;
} else {
desc[1] = 0;
}

- desc[0] = intel_lr_context_descriptor(rq0->ctx, rq0->ring);
+ desc[0] = intel_lr_context_descriptor(rq0->ctx, rq0->engine);
rq0->elsp_submitted++;

/* You must always write both descriptors in the order below. */
spin_lock(&dev_priv->uncore.lock);
intel_uncore_forcewake_get__locked(dev_priv, FORCEWAKE_ALL);
- I915_WRITE_FW(RING_ELSP(ring), upper_32_bits(desc[1]));
- I915_WRITE_FW(RING_ELSP(ring), lower_32_bits(desc[1]));
+ I915_WRITE_FW(RING_ELSP(engine), upper_32_bits(desc[1]));
+ I915_WRITE_FW(RING_ELSP(engine), lower_32_bits(desc[1]));

- I915_WRITE_FW(RING_ELSP(ring), upper_32_bits(desc[0]));
+ I915_WRITE_FW(RING_ELSP(engine), upper_32_bits(desc[0]));
/* The context is automatically loaded after the following */
- I915_WRITE_FW(RING_ELSP(ring), lower_32_bits(desc[0]));
+ I915_WRITE_FW(RING_ELSP(engine), lower_32_bits(desc[0]));

/* ELSP is a wo register, use another nearby reg for posting */
- POSTING_READ_FW(RING_EXECLIST_STATUS_LO(ring));
+ POSTING_READ_FW(RING_EXECLIST_STATUS_LO(engine));
intel_uncore_forcewake_put__locked(dev_priv, FORCEWAKE_ALL);
spin_unlock(&dev_priv->uncore.lock);
}

static int execlists_update_context(struct drm_i915_gem_request *rq)
{
- struct intel_engine_cs *ring = rq->ring;
struct i915_hw_ppgtt *ppgtt = rq->ctx->ppgtt;
- struct drm_i915_gem_object *ctx_obj = rq->ctx->engine[ring->id].state;
+ struct drm_i915_gem_object *ctx_obj = rq->ctx->engine[rq->engine->id].state;
struct drm_i915_gem_object *rb_obj = rq->ringbuf->obj;
struct page *page;
uint32_t *reg_state;
@@ -377,7 +374,7 @@ static int execlists_update_context(struct drm_i915_gem_request *rq)
reg_state[CTX_RING_TAIL+1] = rq->tail;
reg_state[CTX_RING_BUFFER_START+1] = i915_gem_obj_ggtt_offset(rb_obj);

- if (ppgtt && !USES_FULL_48BIT_PPGTT(ppgtt->base.dev)) {
+ if (ppgtt && !USES_FULL_48BIT_PPGTT(rq->i915)) {
/* True 32b PPGTT with dynamic page allocation: update PDP
* registers and point the unallocated PDPs to scratch page.
* PML4 is allocated during ppgtt init, so this is not needed
@@ -582,22 +579,22 @@ void intel_lrc_irq_handler(struct intel_engine_cs *ring)

static int execlists_context_queue(struct drm_i915_gem_request *request)
{
- struct intel_engine_cs *ring = request->ring;
+ struct intel_engine_cs *engine = request->engine;
struct drm_i915_gem_request *cursor;
int num_elements = 0;

i915_gem_request_get(request);

- spin_lock_irq(&ring->execlist_lock);
+ spin_lock_irq(&engine->execlist_lock);

- list_for_each_entry(cursor, &ring->execlist_queue, execlist_link)
+ list_for_each_entry(cursor, &engine->execlist_queue, execlist_link)
if (++num_elements > 2)
break;

if (num_elements > 2) {
struct drm_i915_gem_request *tail_req;

- tail_req = list_last_entry(&ring->execlist_queue,
+ tail_req = list_last_entry(&engine->execlist_queue,
struct drm_i915_gem_request,
execlist_link);

@@ -606,41 +603,41 @@ static int execlists_context_queue(struct drm_i915_gem_request *request)
"More than 2 already-submitted reqs queued\n");
list_del(&tail_req->execlist_link);
list_add_tail(&tail_req->execlist_link,
- &ring->execlist_retired_req_list);
+ &engine->execlist_retired_req_list);
}
}

- list_add_tail(&request->execlist_link, &ring->execlist_queue);
+ list_add_tail(&request->execlist_link, &engine->execlist_queue);
if (num_elements == 0)
- execlists_context_unqueue(ring);
+ execlists_context_unqueue(engine);

- spin_unlock_irq(&ring->execlist_lock);
+ spin_unlock_irq(&engine->execlist_lock);

return 0;
}

static int logical_ring_invalidate_all_caches(struct drm_i915_gem_request *req)
{
- struct intel_engine_cs *ring = req->ring;
+ struct intel_engine_cs *engine = req->engine;
uint32_t flush_domains;
int ret;

flush_domains = 0;
- if (ring->gpu_caches_dirty)
+ if (engine->gpu_caches_dirty)
flush_domains = I915_GEM_GPU_DOMAINS;

- ret = ring->emit_flush(req, I915_GEM_GPU_DOMAINS, flush_domains);
+ ret = engine->emit_flush(req, I915_GEM_GPU_DOMAINS, flush_domains);
if (ret)
return ret;

- ring->gpu_caches_dirty = false;
+ engine->gpu_caches_dirty = false;
return 0;
}

static int execlists_move_to_gpu(struct drm_i915_gem_request *req,
struct list_head *vmas)
{
- const unsigned other_rings = ~intel_ring_flag(req->ring);
+ const unsigned other_rings = ~intel_ring_flag(req->engine);
struct i915_vma *vma;
uint32_t flush_domains = 0;
bool flush_chipset = false;
@@ -650,7 +647,7 @@ static int execlists_move_to_gpu(struct drm_i915_gem_request *req,
struct drm_i915_gem_object *obj = vma->obj;

if (obj->active & other_rings) {
- ret = i915_gem_object_sync(obj, req->ring, &req);
+ ret = i915_gem_object_sync(obj, req->engine, &req);
if (ret)
return ret;
}
@@ -674,9 +671,9 @@ int intel_logical_ring_alloc_request_extras(struct drm_i915_gem_request *request
{
int ret;

- request->ringbuf = request->ctx->engine[request->ring->id].ringbuf;
+ request->ringbuf = request->ctx->engine[request->engine->id].ringbuf;

- if (request->ctx != request->ring->default_context) {
+ if (request->ctx != request->engine->default_context) {
ret = intel_lr_context_pin(request);
if (ret)
return ret;
@@ -865,17 +862,17 @@ void intel_logical_ring_stop(struct intel_engine_cs *ring)

int logical_ring_flush_all_caches(struct drm_i915_gem_request *req)
{
- struct intel_engine_cs *ring = req->ring;
+ struct intel_engine_cs *engine = req->engine;
int ret;

- if (!ring->gpu_caches_dirty)
+ if (!engine->gpu_caches_dirty)
return 0;

- ret = ring->emit_flush(req, 0, I915_GEM_GPU_DOMAINS);
+ ret = engine->emit_flush(req, 0, I915_GEM_GPU_DOMAINS);
if (ret)
return ret;

- ring->gpu_caches_dirty = false;
+ engine->gpu_caches_dirty = false;
return 0;
}

@@ -913,34 +910,33 @@ unpin_ctx_obj:

static int intel_lr_context_pin(struct drm_i915_gem_request *rq)
{
- int ret = 0;
- struct intel_engine_cs *ring = rq->ring;
- struct drm_i915_gem_object *ctx_obj = rq->ctx->engine[ring->id].state;
- struct intel_ringbuffer *ringbuf = rq->ringbuf;
+ int engine = rq->engine->id;
+ int ret;

- if (rq->ctx->engine[ring->id].pin_count++ == 0) {
- ret = intel_lr_context_do_pin(ring, ctx_obj, ringbuf);
- if (ret)
- goto reset_pin_count;
+ if (rq->ctx->engine[engine].pin_count++)
+ return 0;

- i915_gem_context_reference(rq->ctx);
+ ret = intel_lr_context_do_pin(rq->engine,
+ rq->ctx->engine[engine].state,
+ rq->ringbuf);
+ if (ret) {
+ rq->ctx->engine[engine].pin_count = 0;
+ return ret;
}
- return ret;

-reset_pin_count:
- rq->ctx->engine[ring->id].pin_count = 0;
- return ret;
+ i915_gem_context_reference(rq->ctx);
+ return 0;
}

void intel_lr_context_unpin(struct drm_i915_gem_request *rq)
{
- struct intel_engine_cs *ring = rq->ring;
- struct drm_i915_gem_object *ctx_obj = rq->ctx->engine[ring->id].state;
+ int engine = rq->engine->id;
+ struct drm_i915_gem_object *ctx_obj = rq->ctx->engine[engine].state;
struct intel_ringbuffer *ringbuf = rq->ringbuf;

if (ctx_obj) {
- WARN_ON(!mutex_is_locked(&ring->dev->struct_mutex));
- if (--rq->ctx->engine[ring->id].pin_count == 0) {
+ WARN_ON(!mutex_is_locked(&rq->i915->dev->struct_mutex));
+ if (--rq->ctx->engine[engine].pin_count == 0) {
intel_unpin_ringbuffer_obj(ringbuf);
i915_gem_object_ggtt_unpin(ctx_obj);
i915_gem_context_unreference(rq->ctx);
@@ -951,7 +947,7 @@ void intel_lr_context_unpin(struct drm_i915_gem_request *rq)
static int intel_logical_ring_workarounds_emit(struct drm_i915_gem_request *req)
{
int ret, i;
- struct intel_engine_cs *ring = req->ring;
+ struct intel_engine_cs *engine = req->engine;
struct intel_ringbuffer *ringbuf = req->ringbuf;
struct drm_i915_private *dev_priv = req->i915;
struct i915_workarounds *w = &dev_priv->workarounds;
@@ -959,7 +955,7 @@ static int intel_logical_ring_workarounds_emit(struct drm_i915_gem_request *req)
if (w->count == 0)
return 0;

- ring->gpu_caches_dirty = true;
+ engine->gpu_caches_dirty = true;
ret = logical_ring_flush_all_caches(req);
if (ret)
return ret;
@@ -977,7 +973,7 @@ static int intel_logical_ring_workarounds_emit(struct drm_i915_gem_request *req)

intel_ring_advance(ringbuf);

- ring->gpu_caches_dirty = true;
+ engine->gpu_caches_dirty = true;
ret = logical_ring_flush_all_caches(req);
if (ret)
return ret;
@@ -1421,7 +1417,7 @@ static int gen9_init_render_ring(struct intel_engine_cs *ring)
static int intel_logical_ring_emit_pdps(struct drm_i915_gem_request *req)
{
struct i915_hw_ppgtt *ppgtt = req->ctx->ppgtt;
- struct intel_engine_cs *ring = req->ring;
+ struct intel_engine_cs *engine = req->engine;
struct intel_ringbuffer *ringbuf = req->ringbuf;
const int num_lri_cmds = GEN8_LEGACY_PDPES * 2;
int i, ret;
@@ -1434,9 +1430,9 @@ static int intel_logical_ring_emit_pdps(struct drm_i915_gem_request *req)
for (i = GEN8_LEGACY_PDPES - 1; i >= 0; i--) {
const dma_addr_t pd_daddr = i915_page_dir_dma_addr(ppgtt, i);

- intel_ring_emit_reg(ringbuf, GEN8_RING_PDP_UDW(ring, i));
+ intel_ring_emit_reg(ringbuf, GEN8_RING_PDP_UDW(engine, i));
intel_ring_emit(ringbuf, upper_32_bits(pd_daddr));
- intel_ring_emit_reg(ringbuf, GEN8_RING_PDP_LDW(ring, i));
+ intel_ring_emit_reg(ringbuf, GEN8_RING_PDP_LDW(engine, i));
intel_ring_emit(ringbuf, lower_32_bits(pd_daddr));
}

@@ -1460,7 +1456,7 @@ static int gen8_emit_bb_start(struct drm_i915_gem_request *req,
* not idle). PML4 is allocated during ppgtt init so this is
* not needed in 48-bit.*/
if (req->ctx->ppgtt &&
- (intel_ring_flag(req->ring) & req->ctx->ppgtt->pd_dirty_rings)) {
+ (intel_ring_flag(req->engine) & req->ctx->ppgtt->pd_dirty_rings)) {
if (!USES_FULL_48BIT_PPGTT(req->i915) &&
!intel_vgpu_active(req->i915->dev)) {
ret = intel_logical_ring_emit_pdps(req);
@@ -1468,7 +1464,7 @@ static int gen8_emit_bb_start(struct drm_i915_gem_request *req,
return ret;
}

- req->ctx->ppgtt->pd_dirty_rings &= ~intel_ring_flag(req->ring);
+ req->ctx->ppgtt->pd_dirty_rings &= ~intel_ring_flag(req->engine);
}

ret = intel_ring_begin(req, 4);
@@ -1672,21 +1668,21 @@ static int intel_lr_context_render_state_init(struct drm_i915_gem_request *req)
struct render_state so;
int ret;

- ret = i915_gem_render_state_prepare(req->ring, &so);
+ ret = i915_gem_render_state_prepare(req->engine, &so);
if (ret)
return ret;

if (so.rodata == NULL)
return 0;

- ret = req->ring->emit_bb_start(req, so.ggtt_offset,
- I915_DISPATCH_SECURE);
+ ret = req->engine->emit_bb_start(req, so.ggtt_offset,
+ I915_DISPATCH_SECURE);
if (ret)
goto out;

- ret = req->ring->emit_bb_start(req,
- (so.ggtt_offset + so.aux_batch_offset),
- I915_DISPATCH_SECURE);
+ ret = req->engine->emit_bb_start(req,
+ (so.ggtt_offset + so.aux_batch_offset),
+ I915_DISPATCH_SECURE);
if (ret)
goto out;

diff --git a/drivers/gpu/drm/i915/intel_mocs.c b/drivers/gpu/drm/i915/intel_mocs.c
index 5d4f6f3b67cd..40041bebc3dc 100644
--- a/drivers/gpu/drm/i915/intel_mocs.c
+++ b/drivers/gpu/drm/i915/intel_mocs.c
@@ -138,21 +138,21 @@ static const struct drm_i915_mocs_entry broxton_mocs_table[] = {
*
* Return: true if there are applicable MOCS settings for the device.
*/
-static bool get_mocs_settings(struct drm_device *dev,
+static bool get_mocs_settings(struct drm_i915_private *dev_priv,
struct drm_i915_mocs_table *table)
{
bool result = false;

- if (IS_SKYLAKE(dev) || IS_KABYLAKE(dev)) {
+ if (IS_SKYLAKE(dev_priv) || IS_KABYLAKE(dev_priv)) {
table->size = ARRAY_SIZE(skylake_mocs_table);
table->table = skylake_mocs_table;
result = true;
- } else if (IS_BROXTON(dev)) {
+ } else if (IS_BROXTON(dev_priv)) {
table->size = ARRAY_SIZE(broxton_mocs_table);
table->table = broxton_mocs_table;
result = true;
} else {
- WARN_ONCE(INTEL_INFO(dev)->gen >= 9,
+ WARN_ONCE(INTEL_INFO(dev_priv)->gen >= 9,
"Platform that should have a MOCS table does not.\n");
}

@@ -316,13 +316,12 @@ int intel_rcs_context_init_mocs(struct drm_i915_gem_request *req)
struct drm_i915_mocs_table t;
int ret;

- if (get_mocs_settings(req->ring->dev, &t)) {
- struct drm_i915_private *dev_priv = req->i915;
+ if (get_mocs_settings(req->i915, &t)) {
struct intel_engine_cs *ring;
enum intel_ring_id ring_id;

/* Program the control registers */
- for_each_ring(ring, dev_priv, ring_id) {
+ for_each_ring(ring, req->i915, ring_id) {
ret = emit_mocs_control_table(req, &t, ring_id);
if (ret)
return ret;
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index db5c407f7720..072fd0fc7748 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -192,7 +192,7 @@ static int
intel_emit_post_sync_nonzero_flush(struct drm_i915_gem_request *req)
{
struct intel_ringbuffer *ring = req->ringbuf;
- u32 scratch_addr = req->ring->scratch.gtt_offset + 2 * CACHELINE_BYTES;
+ u32 scratch_addr = req->engine->scratch.gtt_offset + 2 * CACHELINE_BYTES;
int ret;

ret = intel_ring_begin(req, 6);
@@ -229,7 +229,7 @@ gen6_render_ring_flush(struct drm_i915_gem_request *req,
{
struct intel_ringbuffer *ring = req->ringbuf;
u32 flags = 0;
- u32 scratch_addr = req->ring->scratch.gtt_offset + 2 * CACHELINE_BYTES;
+ u32 scratch_addr = req->engine->scratch.gtt_offset + 2 * CACHELINE_BYTES;
int ret;

/* Force SNB workarounds for PIPE_CONTROL flushes */
@@ -302,7 +302,7 @@ gen7_render_ring_flush(struct drm_i915_gem_request *req,
{
struct intel_ringbuffer *ring = req->ringbuf;
u32 flags = 0;
- u32 scratch_addr = req->ring->scratch.gtt_offset + 2 * CACHELINE_BYTES;
+ u32 scratch_addr = req->engine->scratch.gtt_offset + 2 * CACHELINE_BYTES;
int ret;

/*
@@ -386,7 +386,7 @@ gen8_render_ring_flush(struct drm_i915_gem_request *req,
u32 invalidate_domains, u32 flush_domains)
{
u32 flags = 0;
- u32 scratch_addr = req->ring->scratch.gtt_offset + 2 * CACHELINE_BYTES;
+ u32 scratch_addr = req->engine->scratch.gtt_offset + 2 * CACHELINE_BYTES;
int ret;

flags |= PIPE_CONTROL_CS_STALL;
@@ -696,7 +696,7 @@ static int intel_ring_workarounds_emit(struct drm_i915_gem_request *req)
if (w->count == 0)
return 0;

- req->ring->gpu_caches_dirty = true;
+ req->engine->gpu_caches_dirty = true;
ret = intel_ring_flush_all_caches(req);
if (ret)
return ret;
@@ -714,7 +714,7 @@ static int intel_ring_workarounds_emit(struct drm_i915_gem_request *req)

intel_ring_advance(ring);

- req->ring->gpu_caches_dirty = true;
+ req->engine->gpu_caches_dirty = true;
ret = intel_ring_flush_all_caches(req);
if (ret)
return ret;
@@ -1205,7 +1205,7 @@ static int gen8_rcs_signal(struct drm_i915_gem_request *signaller_req,
return ret;

for_each_ring(waiter, dev_priv, i) {
- u64 gtt_offset = signaller_req->ring->semaphore.signal_ggtt[i];
+ u64 gtt_offset = signaller_req->engine->semaphore.signal_ggtt[i];
if (gtt_offset == MI_SEMAPHORE_SYNC_INVALID)
continue;

@@ -1243,7 +1243,7 @@ static int gen8_xcs_signal(struct drm_i915_gem_request *signaller_req,
return ret;

for_each_ring(waiter, dev_priv, i) {
- u64 gtt_offset = signaller_req->ring->semaphore.signal_ggtt[i];
+ u64 gtt_offset = signaller_req->engine->semaphore.signal_ggtt[i];
if (gtt_offset == MI_SEMAPHORE_SYNC_INVALID)
continue;

@@ -1279,7 +1279,7 @@ static int gen6_signal(struct drm_i915_gem_request *signaller_req,
return ret;

for_each_ring(useless, dev_priv, i) {
- i915_reg_t mbox_reg = signaller_req->ring->semaphore.mbox.signal[i];
+ i915_reg_t mbox_reg = signaller_req->engine->semaphore.mbox.signal[i];

if (i915_mmio_reg_valid(mbox_reg)) {
intel_ring_emit(signaller, MI_LOAD_REGISTER_IMM(1));
@@ -1309,8 +1309,8 @@ gen6_add_request(struct drm_i915_gem_request *req)
struct intel_ringbuffer *ring = req->ringbuf;
int ret;

- if (req->ring->semaphore.signal)
- ret = req->ring->semaphore.signal(req, 4);
+ if (req->engine->semaphore.signal)
+ ret = req->engine->semaphore.signal(req, 4);
else
ret = intel_ring_begin(req, 4);

@@ -1321,7 +1321,7 @@ gen6_add_request(struct drm_i915_gem_request *req)
intel_ring_emit(ring, I915_GEM_HWS_INDEX << MI_STORE_DWORD_INDEX_SHIFT);
intel_ring_emit(ring, req->fence.seqno);
intel_ring_emit(ring, MI_USER_INTERRUPT);
- __intel_ring_advance(req->ring);
+ __intel_ring_advance(req->engine);

return 0;
}
@@ -1359,10 +1359,10 @@ gen8_ring_sync(struct drm_i915_gem_request *waiter_req,
MI_SEMAPHORE_SAD_GTE_SDD);
intel_ring_emit(waiter, seqno);
intel_ring_emit(waiter,
- lower_32_bits(GEN8_WAIT_OFFSET(waiter_req->ring,
+ lower_32_bits(GEN8_WAIT_OFFSET(waiter_req->engine,
signaller->id)));
intel_ring_emit(waiter,
- upper_32_bits(GEN8_WAIT_OFFSET(waiter_req->ring,
+ upper_32_bits(GEN8_WAIT_OFFSET(waiter_req->engine,
signaller->id)));
intel_ring_advance(waiter);
return 0;
@@ -1377,7 +1377,7 @@ gen6_ring_sync(struct drm_i915_gem_request *waiter_req,
u32 dw1 = MI_SEMAPHORE_MBOX |
MI_SEMAPHORE_COMPARE |
MI_SEMAPHORE_REGISTER;
- u32 wait_mbox = signaller->semaphore.mbox.wait[waiter_req->ring->id];
+ u32 wait_mbox = signaller->semaphore.mbox.wait[waiter_req->engine->id];
int ret;

/* Throughout all of the GEM code, seqno passed implies our current
@@ -1422,7 +1422,7 @@ static int
pc_render_add_request(struct drm_i915_gem_request *req)
{
struct intel_ringbuffer *ring = req->ringbuf;
- u32 addr = req->ring->status_page.gfx_addr +
+ u32 addr = req->engine->status_page.gfx_addr +
(I915_GEM_HWS_INDEX << MI_STORE_DWORD_INDEX_SHIFT);
u32 scratch_addr = addr;
int ret;
@@ -1465,7 +1465,7 @@ pc_render_add_request(struct drm_i915_gem_request *req)
intel_ring_emit(ring, addr | PIPE_CONTROL_GLOBAL_GTT);
intel_ring_emit(ring, req->fence.seqno);
intel_ring_emit(ring, 0);
- __intel_ring_advance(req->ring);
+ __intel_ring_advance(req->engine);

return 0;
}
@@ -1575,7 +1575,7 @@ i9xx_add_request(struct drm_i915_gem_request *req)
intel_ring_emit(ring, I915_GEM_HWS_INDEX << MI_STORE_DWORD_INDEX_SHIFT);
intel_ring_emit(ring, req->fence.seqno);
intel_ring_emit(ring, MI_USER_INTERRUPT);
- __intel_ring_advance(req->ring);
+ __intel_ring_advance(req->engine);

return 0;
}
@@ -1686,7 +1686,7 @@ i830_dispatch_execbuffer(struct drm_i915_gem_request *req,
unsigned dispatch_flags)
{
struct intel_ringbuffer *ring = req->ringbuf;
- u32 cs_offset = req->ring->scratch.gtt_offset;
+ u32 cs_offset = req->engine->scratch.gtt_offset;
int ret;

ret = intel_ring_begin(req, 6);
@@ -2082,7 +2082,7 @@ int intel_ring_idle(struct intel_engine_cs *ring)

int intel_ring_alloc_request_extras(struct drm_i915_gem_request *request)
{
- request->ringbuf = request->ring->buffer;
+ request->ringbuf = request->engine->buffer;
return 0;
}

@@ -2136,7 +2136,7 @@ void intel_ring_reserved_space_end(struct intel_ringbuffer *ringbuf)
static int wait_for_space(struct drm_i915_gem_request *req, int bytes)
{
struct intel_ringbuffer *ringbuf = req->ringbuf;
- struct intel_engine_cs *ring = req->ring;
+ struct intel_engine_cs *engine = req->engine;
struct drm_i915_gem_request *target;
unsigned space;
int ret;
@@ -2147,7 +2147,7 @@ static int wait_for_space(struct drm_i915_gem_request *req, int bytes)
/* The whole point of reserving space is to not wait! */
WARN_ON(ringbuf->reserved_in_use);

- list_for_each_entry(target, &ring->request_list, list) {
+ list_for_each_entry(target, &engine->request_list, list) {
/*
* The request queue is per-engine, so can contain requests
* from multiple ringbuffers. Here, we must ignore any that
@@ -2163,7 +2163,7 @@ static int wait_for_space(struct drm_i915_gem_request *req, int bytes)
break;
}

- if (WARN_ON(&target->list == &ring->request_list))
+ if (WARN_ON(&target->list == &engine->request_list))
return -ENOSPC;

ret = i915_wait_request(target);
@@ -2836,40 +2836,40 @@ int intel_init_vebox_ring_buffer(struct drm_device *dev)
int
intel_ring_flush_all_caches(struct drm_i915_gem_request *req)
{
- struct intel_engine_cs *ring = req->ring;
+ struct intel_engine_cs *engine = req->engine;
int ret;

- if (!ring->gpu_caches_dirty)
+ if (!engine->gpu_caches_dirty)
return 0;

- ret = ring->flush(req, 0, I915_GEM_GPU_DOMAINS);
+ ret = engine->flush(req, 0, I915_GEM_GPU_DOMAINS);
if (ret)
return ret;

trace_i915_gem_ring_flush(req, 0, I915_GEM_GPU_DOMAINS);

- ring->gpu_caches_dirty = false;
+ engine->gpu_caches_dirty = false;
return 0;
}

int
intel_ring_invalidate_all_caches(struct drm_i915_gem_request *req)
{
- struct intel_engine_cs *ring = req->ring;
+ struct intel_engine_cs *engine = req->engine;
uint32_t flush_domains;
int ret;

flush_domains = 0;
- if (ring->gpu_caches_dirty)
+ if (engine->gpu_caches_dirty)
flush_domains = I915_GEM_GPU_DOMAINS;

- ret = ring->flush(req, I915_GEM_GPU_DOMAINS, flush_domains);
+ ret = engine->flush(req, I915_GEM_GPU_DOMAINS, flush_domains);
if (ret)
return ret;

trace_i915_gem_ring_flush(req, I915_GEM_GPU_DOMAINS, flush_domains);

- ring->gpu_caches_dirty = false;
+ engine->gpu_caches_dirty = false;
return 0;
}
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:16:56 UTC
If we move the release of the GEM request (i.e. decoupling it from the
various lists used for client and context tracking) to after it is
complete (either by the GPU retiring the request, or by the caller
cancelling the request), we can remove the requirement that the final
unreference of the GEM request be under the struct_mutex.

v2: Execlists, as always, is badly asymmetric, and year-old patches to
fix it up still haven't landed.
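
In outline: retirement and cancellation now unlink the request from the
client and context lists (and drop the context reference) while the
caller still holds the lock, so by the time the last reference can be
dropped, i915_gem_request_free() touches nothing but the slab cache. A
condensed sketch of the resulting rule, using the helpers as they stand
after this patch:

static inline void
i915_gem_request_unreference(struct drm_i915_gem_request *req)
{
	/* No struct_mutex assertion any more: the request was already
	 * decoupled from its lists at retire/cancel time.
	 */
	kref_put(&req->ref, i915_gem_request_free);
}

void i915_gem_request_free(struct kref *req_ref)
{
	struct drm_i915_gem_request *req =
		container_of(req_ref, typeof(*req), ref);

	/* Only memory is returned here; no list or lock manipulation. */
	kmem_cache_free(req->i915->requests, req);
}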

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_gem.c | 4 +--
drivers/gpu/drm/i915/i915_gem_request.c | 50 ++++++++++++++------------------
drivers/gpu/drm/i915/i915_gem_request.h | 14 ---------
drivers/gpu/drm/i915/intel_breadcrumbs.c | 2 +-
drivers/gpu/drm/i915/intel_display.c | 2 +-
drivers/gpu/drm/i915/intel_lrc.c | 6 ++--
drivers/gpu/drm/i915/intel_pm.c | 2 +-
7 files changed, 30 insertions(+), 50 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 68a25617ca7a..6d8d65304abf 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2502,7 +2502,7 @@ i915_gem_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
ret = __i915_wait_request(req[i], true,
args->timeout_ns > 0 ? &args->timeout_ns : NULL,
to_rps_client(file));
- i915_gem_request_unreference__unlocked(req[i]);
+ i915_gem_request_unreference(req[i]);
}
return ret;

@@ -3505,7 +3505,7 @@ i915_gem_ring_throttle(struct drm_device *dev, struct drm_file *file)
return 0;

ret = __i915_wait_request(target, true, NULL, NULL);
- i915_gem_request_unreference__unlocked(target);
+ i915_gem_request_unreference(target);

return ret;
}
diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
index b4ede6dd7b20..1c4f4d83a3c2 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/i915_gem_request.c
@@ -184,13 +184,6 @@ err:
return ret;
}

-void i915_gem_request_cancel(struct drm_i915_gem_request *req)
-{
- intel_ring_reserved_space_cancel(req->ringbuf);
-
- i915_gem_request_unreference(req);
-}
-
int i915_gem_request_add_to_client(struct drm_i915_gem_request *req,
struct drm_file *file)
{
@@ -235,9 +228,28 @@ i915_gem_request_remove_from_client(struct drm_i915_gem_request *request)
request->pid = NULL;
}

+static void __i915_gem_request_release(struct drm_i915_gem_request *request)
+{
+ i915_gem_request_remove_from_client(request);
+
+ i915_gem_context_unreference(request->ctx);
+ i915_gem_request_unreference(request);
+}
+
+void i915_gem_request_cancel(struct drm_i915_gem_request *req)
+{
+ intel_ring_reserved_space_cancel(req->ringbuf);
+ if (i915.enable_execlists) {
+ if (req->ctx != req->ring->default_context)
+ intel_lr_context_unpin(req);
+ }
+ __i915_gem_request_release(req);
+}
+
static void i915_gem_request_retire(struct drm_i915_gem_request *request)
{
trace_i915_gem_request_retire(request);
+ list_del_init(&request->list);

/* We know the GPU must have read the request to have
* sent us the seqno + interrupt, so use the position
@@ -248,11 +260,7 @@ static void i915_gem_request_retire(struct drm_i915_gem_request *request)
* completion order.
*/
request->ringbuf->last_retired_head = request->postfix;
-
- list_del_init(&request->list);
- i915_gem_request_remove_from_client(request);
-
- i915_gem_request_unreference(request);
+ __i915_gem_request_release(request);
}

void
@@ -639,21 +647,7 @@ i915_wait_request(struct drm_i915_gem_request *req)

void i915_gem_request_free(struct kref *req_ref)
{
- struct drm_i915_gem_request *req = container_of(req_ref,
- typeof(*req), ref);
- struct intel_context *ctx = req->ctx;
-
- if (req->file_priv)
- i915_gem_request_remove_from_client(req);
-
- if (ctx) {
- if (i915.enable_execlists) {
- if (ctx != req->ring->default_context)
- intel_lr_context_unpin(req);
- }
-
- i915_gem_context_unreference(ctx);
- }
-
+ struct drm_i915_gem_request *req =
+ container_of(req_ref, typeof(*req), ref);
kmem_cache_free(req->i915->requests, req);
}
diff --git a/drivers/gpu/drm/i915/i915_gem_request.h b/drivers/gpu/drm/i915/i915_gem_request.h
index d46f22f30b0a..af1b825fce50 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.h
+++ b/drivers/gpu/drm/i915/i915_gem_request.h
@@ -154,23 +154,9 @@ i915_gem_request_reference(struct drm_i915_gem_request *req)
static inline void
i915_gem_request_unreference(struct drm_i915_gem_request *req)
{
- WARN_ON(!mutex_is_locked(&req->ring->dev->struct_mutex));
kref_put(&req->ref, i915_gem_request_free);
}

-static inline void
-i915_gem_request_unreference__unlocked(struct drm_i915_gem_request *req)
-{
- struct drm_device *dev;
-
- if (!req)
- return;
-
- dev = req->ring->dev;
- if (kref_put_mutex(&req->ref, i915_gem_request_free, &dev->struct_mutex))
- mutex_unlock(&dev->struct_mutex);
-}
-
static inline void i915_gem_request_assign(struct drm_i915_gem_request **pdst,
struct drm_i915_gem_request *src)
{
diff --git a/drivers/gpu/drm/i915/intel_breadcrumbs.c b/drivers/gpu/drm/i915/intel_breadcrumbs.c
index 0ea01bd6811c..f6731aac7fcf 100644
--- a/drivers/gpu/drm/i915/intel_breadcrumbs.c
+++ b/drivers/gpu/drm/i915/intel_breadcrumbs.c
@@ -390,7 +390,7 @@ static int intel_breadcrumbs_signaler(void *arg)
*/
intel_engine_remove_wait(engine, &signal->wait);

- i915_gem_request_unreference__unlocked(signal->request);
+ i915_gem_request_unreference(signal->request);

/* Find the next oldest signal. Note that as we have
* not been holding the lock, another client may
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 57c54c9bc82b..32885b8d5c02 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -11431,7 +11431,7 @@ static void intel_mmio_flip_work_func(struct work_struct *work)
WARN_ON(__i915_wait_request(mmio_flip->req,
false, NULL,
&mmio_flip->i915->rps.mmioflips));
- i915_gem_request_unreference__unlocked(mmio_flip->req);
+ i915_gem_request_unreference(mmio_flip->req);
}

/* For framebuffer backed by dmabuf, wait for fence */
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index b634e7d7a92b..7a3069a2beb2 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -587,9 +587,6 @@ static int execlists_context_queue(struct drm_i915_gem_request *request)
struct drm_i915_gem_request *cursor;
int num_elements = 0;

- if (request->ctx != ring->default_context)
- intel_lr_context_pin(request);
-
i915_gem_request_reference(request);

spin_lock_irq(&ring->execlist_lock);
@@ -1071,6 +1068,8 @@ static int intel_lr_context_pin(struct drm_i915_gem_request *rq)
ret = intel_lr_context_do_pin(ring, ctx_obj, ringbuf);
if (ret)
goto reset_pin_count;
+
+ i915_gem_context_reference(rq->ctx);
}
return ret;

@@ -1090,6 +1089,7 @@ void intel_lr_context_unpin(struct drm_i915_gem_request *rq)
if (--rq->ctx->engine[ring->id].pin_count == 0) {
intel_unpin_ringbuffer_obj(ringbuf);
i915_gem_object_ggtt_unpin(ctx_obj);
+ i915_gem_context_unreference(rq->ctx);
}
}
}
diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index e51ba529a97e..0e13135aefaa 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -7289,7 +7289,7 @@ static void __intel_rps_boost_work(struct work_struct *work)
gen6_rps_boost(to_i915(req->ring->dev), NULL,
req->emitted_jiffies);

- i915_gem_request_unreference__unlocked(req);
+ i915_gem_request_unreference(req);
kfree(boost);
}
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:16:59 UTC
We want to restrict waitboosting to known process contexts, where we can
track which clients are receiving waitboosts and prevent excessive power
waste. For fence_wait() we have no client tracking at all, which leaves
it open to abuse.
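
The mechanism is a sentinel: __i915_wait_request() already takes a
struct intel_rps_client pointer, and fence_wait() now passes an ERR_PTR
rather than NULL, so the wait path can distinguish "untracked, never
boost" from "anonymous but boostable". A condensed sketch of how the
sentinel is tested, taken from the hunks below:

#define NO_WAITBOOST ERR_PTR(-1)

/* In __i915_wait_request(..., struct intel_rps_client *rps): */
if (!IS_ERR(rps) && INTEL_INFO(req->i915)->gen >= 6)
	gen6_rps_boost(req->i915, rps, req->emitted_jiffies);

/* ... and the idle-stall boost on completion is gated the same way: */
if (ret == 0 && !IS_ERR_OR_NULL(rps) &&
    req->fence.seqno == req->ring->last_submitted_seqno) {
	/* boost bookkeeping for real, tracked clients only */
}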

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_gem_request.c | 6 +++---
drivers/gpu/drm/i915/i915_gem_request.h | 1 +
2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
index a796dbd1b0e4..01893d847dfd 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/i915_gem_request.c
@@ -68,7 +68,7 @@ static signed long i915_fence_wait(struct fence *fence,

ret = __i915_wait_request(to_i915_request(fence),
interruptible, timeout,
- NULL);
+ NO_WAITBOOST);
if (ret == -ETIME)
return 0;

@@ -621,7 +621,7 @@ int __i915_wait_request(struct drm_i915_gem_request *req,
* forcing the clocks too high for the whole system, we only allow
* each client to waitboost once in a busy period.
*/
- if (INTEL_INFO(req->i915)->gen >= 6)
+ if (!IS_ERR(rps) && INTEL_INFO(req->i915)->gen >= 6)
gen6_rps_boost(req->i915, rps, req->emitted_jiffies);

intel_wait_init(&wait, req->fence.seqno);
@@ -691,7 +691,7 @@ complete:
*timeout = 0;
}

- if (ret == 0 && rps &&
+ if (ret == 0 && !IS_ERR_OR_NULL(rps) &&
req->fence.seqno == req->ring->last_submitted_seqno) {
/* The GPU is now idle and this client has stalled.
* Since no other client has submitted a request in the
diff --git a/drivers/gpu/drm/i915/i915_gem_request.h b/drivers/gpu/drm/i915/i915_gem_request.h
index 0ab14fd0fce0..6b3de827929a 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.h
+++ b/drivers/gpu/drm/i915/i915_gem_request.h
@@ -179,6 +179,7 @@ void __i915_add_request(struct drm_i915_gem_request *req,
__i915_add_request(req, NULL, false)

struct intel_rps_client;
+#define NO_WAITBOOST ERR_PTR(-1)

int __i915_wait_request(struct drm_i915_gem_request *req,
bool interruptible,
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:17:06 UTC
Both the legacy and execlists emit helpers perform the same actions with
more or less indirection, so just unify the code.
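
Under the hood, both sets of helpers write dwords into a struct
intel_ringbuffer, so the intel_logical_ring_emit*() wrappers in
intel_lrc.h go away and every caller uses the intel_ring_emit*()
helpers on req->ringbuf directly. A condensed example of the resulting
calling convention, lifted from the execlists submission hunk below:

struct intel_ringbuffer *ring = req->ringbuf;

/* Identical whether the request executes via legacy rings or execlists. */
intel_ring_emit(ring, MI_NOOP);
intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(1));
intel_ring_emit_reg(ring, INSTPM);
intel_ring_emit(ring, instp_mask << 16 | instp_mode);
intel_ring_advance(ring);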

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_gem.c | 2 +-
drivers/gpu/drm/i915/i915_gem_context.c | 8 +-
drivers/gpu/drm/i915/i915_gem_execbuffer.c | 34 ++++-----
drivers/gpu/drm/i915/i915_gem_gtt.c | 26 +++----
drivers/gpu/drm/i915/intel_display.c | 26 +++----
drivers/gpu/drm/i915/intel_lrc.c | 114 ++++++++++++++---------------
drivers/gpu/drm/i915/intel_lrc.h | 26 -------
drivers/gpu/drm/i915/intel_mocs.c | 30 ++++----
drivers/gpu/drm/i915/intel_overlay.c | 42 +++++------
drivers/gpu/drm/i915/intel_ringbuffer.c | 101 ++++++++++++-------------
drivers/gpu/drm/i915/intel_ringbuffer.h | 21 ++----
11 files changed, 194 insertions(+), 236 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index c2a1ec8abc11..247731672cb1 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -4068,7 +4068,7 @@ err:

int i915_gem_l3_remap(struct drm_i915_gem_request *req, int slice)
{
- struct intel_engine_cs *ring = req->ring;
+ struct intel_ringbuffer *ring = req->ringbuf;
struct drm_i915_private *dev_priv = req->i915;
u32 *remap_info = dev_priv->l3_parity.remap_info[slice];
int i, ret;
diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index 3e3b4bf3fed1..d58de7e084dc 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -519,7 +519,7 @@ i915_gem_context_get(struct drm_i915_file_private *file_priv, u32 id)
static inline int
mi_set_context(struct drm_i915_gem_request *req, u32 hw_flags)
{
- struct intel_engine_cs *ring = req->ring;
+ struct intel_ringbuffer *ring = req->ringbuf;
u32 flags = hw_flags | MI_MM_SPACE_GTT;
const int num_rings =
/* Use an extended w/a on ivb+ if signalling from other rings */
@@ -534,7 +534,7 @@ mi_set_context(struct drm_i915_gem_request *req, u32 hw_flags)
* itlb_before_ctx_switch.
*/
if (IS_GEN6(req->i915)) {
- ret = ring->flush(req, I915_GEM_GPU_DOMAINS, 0);
+ ret = req->ring->flush(req, I915_GEM_GPU_DOMAINS, 0);
if (ret)
return ret;
}
@@ -562,7 +562,7 @@ mi_set_context(struct drm_i915_gem_request *req, u32 hw_flags)

intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(num_rings));
for_each_ring(signaller, req->i915, i) {
- if (signaller == ring)
+ if (signaller == req->ring)
continue;

intel_ring_emit_reg(ring, RING_PSMI_CTL(signaller->mmio_base));
@@ -587,7 +587,7 @@ mi_set_context(struct drm_i915_gem_request *req, u32 hw_flags)

intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(num_rings));
for_each_ring(signaller, req->i915, i) {
- if (signaller == ring)
+ if (signaller == req->ring)
continue;

intel_ring_emit_reg(ring, RING_PSMI_CTL(signaller->mmio_base));
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index 78b462956c78..603a247ac333 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -1146,14 +1146,12 @@ i915_gem_execbuffer_retire_commands(struct i915_execbuffer_params *params)
}

static int
-i915_reset_gen7_sol_offsets(struct drm_device *dev,
- struct drm_i915_gem_request *req)
+i915_reset_gen7_sol_offsets(struct drm_i915_gem_request *req)
{
- struct intel_engine_cs *ring = req->ring;
- struct drm_i915_private *dev_priv = dev->dev_private;
+ struct intel_ringbuffer *ring = req->ringbuf;
int ret, i;

- if (!IS_GEN7(dev) || ring != &dev_priv->ring[RCS]) {
+ if (!IS_GEN7(req->i915) || req->ring->id != RCS) {
DRM_DEBUG("sol reset is gen7/rcs only\n");
return -EINVAL;
}
@@ -1231,9 +1229,8 @@ i915_gem_ringbuffer_submission(struct i915_execbuffer_params *params,
struct drm_i915_gem_execbuffer2 *args,
struct list_head *vmas)
{
- struct drm_device *dev = params->dev;
- struct intel_engine_cs *ring = params->ring;
- struct drm_i915_private *dev_priv = dev->dev_private;
+ struct intel_ringbuffer *ring = params->request->ringbuf;
+ struct drm_i915_private *dev_priv = params->request->i915;
u64 exec_start, exec_len;
int instp_mode;
u32 instp_mask;
@@ -1247,34 +1244,31 @@ i915_gem_ringbuffer_submission(struct i915_execbuffer_params *params,
if (ret)
return ret;

- WARN(params->ctx->ppgtt && params->ctx->ppgtt->pd_dirty_rings & (1<<ring->id),
- "%s didn't clear reload\n", ring->name);
-
instp_mode = args->flags & I915_EXEC_CONSTANTS_MASK;
instp_mask = I915_EXEC_CONSTANTS_MASK;
switch (instp_mode) {
case I915_EXEC_CONSTANTS_REL_GENERAL:
case I915_EXEC_CONSTANTS_ABSOLUTE:
case I915_EXEC_CONSTANTS_REL_SURFACE:
- if (instp_mode != 0 && ring != &dev_priv->ring[RCS]) {
+ if (instp_mode != 0 && params->ring->id != RCS) {
DRM_DEBUG("non-0 rel constants mode on non-RCS\n");
return -EINVAL;
}

if (instp_mode != dev_priv->relative_constants_mode) {
- if (INTEL_INFO(dev)->gen < 4) {
+ if (INTEL_INFO(dev_priv)->gen < 4) {
DRM_DEBUG("no rel constants on pre-gen4\n");
return -EINVAL;
}

- if (INTEL_INFO(dev)->gen > 5 &&
+ if (INTEL_INFO(dev_priv)->gen > 5 &&
instp_mode == I915_EXEC_CONSTANTS_REL_SURFACE) {
DRM_DEBUG("rel surface constants mode invalid on gen5+\n");
return -EINVAL;
}

/* The HW changed the meaning on this bit on gen6 */
- if (INTEL_INFO(dev)->gen >= 6)
+ if (INTEL_INFO(dev_priv)->gen >= 6)
instp_mask &= ~I915_EXEC_CONSTANTS_REL_SURFACE;
}
break;
@@ -1283,7 +1277,7 @@ i915_gem_ringbuffer_submission(struct i915_execbuffer_params *params,
return -EINVAL;
}

- if (ring == &dev_priv->ring[RCS] &&
+ if (params->ring->id == RCS &&
instp_mode != dev_priv->relative_constants_mode) {
ret = intel_ring_begin(params->request, 4);
if (ret)
@@ -1299,7 +1293,7 @@ i915_gem_ringbuffer_submission(struct i915_execbuffer_params *params,
}

if (args->flags & I915_EXEC_GEN7_SOL_RESET) {
- ret = i915_reset_gen7_sol_offsets(dev, params->request);
+ ret = i915_reset_gen7_sol_offsets(params->request);
if (ret)
return ret;
}
@@ -1308,9 +1302,9 @@ i915_gem_ringbuffer_submission(struct i915_execbuffer_params *params,
exec_start = params->batch_obj_vm_offset +
params->args_batch_start_offset;

- ret = ring->dispatch_execbuffer(params->request,
- exec_start, exec_len,
- params->dispatch_flags);
+ ret = params->ring->dispatch_execbuffer(params->request,
+ exec_start, exec_len,
+ params->dispatch_flags);
if (ret)
return ret;

diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 224fe89baca3..98841b05f764 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -656,7 +656,7 @@ static int gen8_write_pdp(struct drm_i915_gem_request *req,
unsigned entry,
dma_addr_t addr)
{
- struct intel_engine_cs *ring = req->ring;
+ struct intel_ringbuffer *ring = req->ringbuf;
int ret;

BUG_ON(entry >= 4);
@@ -666,10 +666,10 @@ static int gen8_write_pdp(struct drm_i915_gem_request *req,
return ret;

intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(1));
- intel_ring_emit_reg(ring, GEN8_RING_PDP_UDW(ring, entry));
+ intel_ring_emit_reg(ring, GEN8_RING_PDP_UDW(req->ring, entry));
intel_ring_emit(ring, upper_32_bits(addr));
intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(1));
- intel_ring_emit_reg(ring, GEN8_RING_PDP_LDW(ring, entry));
+ intel_ring_emit_reg(ring, GEN8_RING_PDP_LDW(req->ring, entry));
intel_ring_emit(ring, lower_32_bits(addr));
intel_ring_advance(ring);

@@ -1648,11 +1648,11 @@ static uint32_t get_pd_offset(struct i915_hw_ppgtt *ppgtt)
static int hsw_mm_switch(struct i915_hw_ppgtt *ppgtt,
struct drm_i915_gem_request *req)
{
- struct intel_engine_cs *ring = req->ring;
+ struct intel_ringbuffer *ring = req->ringbuf;
int ret;

/* NB: TLBs must be flushed and invalidated before a switch */
- ret = ring->flush(req, I915_GEM_GPU_DOMAINS, I915_GEM_GPU_DOMAINS);
+ ret = req->ring->flush(req, I915_GEM_GPU_DOMAINS, I915_GEM_GPU_DOMAINS);
if (ret)
return ret;

@@ -1661,9 +1661,9 @@ static int hsw_mm_switch(struct i915_hw_ppgtt *ppgtt,
return ret;

intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(2));
- intel_ring_emit_reg(ring, RING_PP_DIR_DCLV(ring));
+ intel_ring_emit_reg(ring, RING_PP_DIR_DCLV(req->ring));
intel_ring_emit(ring, PP_DIR_DCLV_2G);
- intel_ring_emit_reg(ring, RING_PP_DIR_BASE(ring));
+ intel_ring_emit_reg(ring, RING_PP_DIR_BASE(req->ring));
intel_ring_emit(ring, get_pd_offset(ppgtt));
intel_ring_emit(ring, MI_NOOP);
intel_ring_advance(ring);
@@ -1685,11 +1685,11 @@ static int vgpu_mm_switch(struct i915_hw_ppgtt *ppgtt,
static int gen7_mm_switch(struct i915_hw_ppgtt *ppgtt,
struct drm_i915_gem_request *req)
{
- struct intel_engine_cs *ring = req->ring;
+ struct intel_ringbuffer *ring = req->ringbuf;
int ret;

/* NB: TLBs must be flushed and invalidated before a switch */
- ret = ring->flush(req, I915_GEM_GPU_DOMAINS, I915_GEM_GPU_DOMAINS);
+ ret = req->ring->flush(req, I915_GEM_GPU_DOMAINS, I915_GEM_GPU_DOMAINS);
if (ret)
return ret;

@@ -1698,16 +1698,16 @@ static int gen7_mm_switch(struct i915_hw_ppgtt *ppgtt,
return ret;

intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(2));
- intel_ring_emit_reg(ring, RING_PP_DIR_DCLV(ring));
+ intel_ring_emit_reg(ring, RING_PP_DIR_DCLV(req->ring));
intel_ring_emit(ring, PP_DIR_DCLV_2G);
- intel_ring_emit_reg(ring, RING_PP_DIR_BASE(ring));
+ intel_ring_emit_reg(ring, RING_PP_DIR_BASE(req->ring));
intel_ring_emit(ring, get_pd_offset(ppgtt));
intel_ring_emit(ring, MI_NOOP);
intel_ring_advance(ring);

/* XXX: RCS is the only one to auto invalidate the TLBs? */
- if (ring->id != RCS) {
- ret = ring->flush(req, I915_GEM_GPU_DOMAINS, I915_GEM_GPU_DOMAINS);
+ if (req->ring->id != RCS) {
+ ret = req->ring->flush(req, I915_GEM_GPU_DOMAINS, I915_GEM_GPU_DOMAINS);
if (ret)
return ret;
}
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index e2822530af25..b28e783f6f04 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -11052,7 +11052,7 @@ static int intel_gen2_queue_flip(struct drm_device *dev,
struct drm_i915_gem_request *req,
uint32_t flags)
{
- struct intel_engine_cs *ring = req->ring;
+ struct intel_ringbuffer *ring = req->ringbuf;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
u32 flip_mask;
int ret;
@@ -11087,7 +11087,7 @@ static int intel_gen3_queue_flip(struct drm_device *dev,
struct drm_i915_gem_request *req,
uint32_t flags)
{
- struct intel_engine_cs *ring = req->ring;
+ struct intel_ringbuffer *ring = req->ringbuf;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
u32 flip_mask;
int ret;
@@ -11119,8 +11119,8 @@ static int intel_gen4_queue_flip(struct drm_device *dev,
struct drm_i915_gem_request *req,
uint32_t flags)
{
- struct intel_engine_cs *ring = req->ring;
- struct drm_i915_private *dev_priv = dev->dev_private;
+ struct intel_ringbuffer *ring = req->ringbuf;
+ struct drm_i915_private *dev_priv = req->i915;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
uint32_t pf, pipesrc;
int ret;
@@ -11158,8 +11158,8 @@ static int intel_gen6_queue_flip(struct drm_device *dev,
struct drm_i915_gem_request *req,
uint32_t flags)
{
- struct intel_engine_cs *ring = req->ring;
- struct drm_i915_private *dev_priv = dev->dev_private;
+ struct intel_ringbuffer *ring = req->ringbuf;
+ struct drm_i915_private *dev_priv = req->i915;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
uint32_t pf, pipesrc;
int ret;
@@ -11194,7 +11194,7 @@ static int intel_gen7_queue_flip(struct drm_device *dev,
struct drm_i915_gem_request *req,
uint32_t flags)
{
- struct intel_engine_cs *ring = req->ring;
+ struct intel_ringbuffer *ring = req->ringbuf;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
uint32_t plane_bit = 0;
int len, ret;
@@ -11215,14 +11215,14 @@ static int intel_gen7_queue_flip(struct drm_device *dev,
}

len = 4;
- if (ring->id == RCS) {
+ if (req->ring->id == RCS) {
len += 6;
/*
* On Gen 8, SRM is now taking an extra dword to accommodate
* 48bits addresses, and we need a NOOP for the batch size to
* stay even.
*/
- if (IS_GEN8(dev))
+ if (IS_GEN8(req->i915))
len += 2;
}

@@ -11253,21 +11253,21 @@ static int intel_gen7_queue_flip(struct drm_device *dev,
* for the RCS also doesn't appear to drop events. Setting the DERRMR
* to zero does lead to lockups within MI_DISPLAY_FLIP.
*/
- if (ring->id == RCS) {
+ if (req->ring->id == RCS) {
intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(1));
intel_ring_emit_reg(ring, DERRMR);
intel_ring_emit(ring, ~(DERRMR_PIPEA_PRI_FLIP_DONE |
DERRMR_PIPEB_PRI_FLIP_DONE |
DERRMR_PIPEC_PRI_FLIP_DONE));
- if (IS_GEN8(dev))
+ if (IS_GEN8(req->i915))
intel_ring_emit(ring, MI_STORE_REGISTER_MEM_GEN8 |
MI_SRM_LRM_GLOBAL_GTT);
else
intel_ring_emit(ring, MI_STORE_REGISTER_MEM |
MI_SRM_LRM_GLOBAL_GTT);
intel_ring_emit_reg(ring, DERRMR);
- intel_ring_emit(ring, ring->scratch.gtt_offset + 256);
- if (IS_GEN8(dev)) {
+ intel_ring_emit(ring, req->ring->scratch.gtt_offset + 256);
+ if (IS_GEN8(req->i915)) {
intel_ring_emit(ring, 0);
intel_ring_emit(ring, MI_NOOP);
}
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index a369aa041522..dc4fc9d8612c 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -754,7 +754,7 @@ intel_logical_ring_advance_and_submit(struct drm_i915_gem_request *request)
{
struct drm_i915_private *dev_priv = request->i915;

- intel_logical_ring_advance(request->ringbuf);
+ intel_ring_advance(request->ringbuf);
request->tail = request->ringbuf->tail;

if (dev_priv->guc.execbuf_client)
@@ -932,11 +932,11 @@ int intel_execlists_submission(struct i915_execbuffer_params *params,
if (ret)
return ret;

- intel_logical_ring_emit(ringbuf, MI_NOOP);
- intel_logical_ring_emit(ringbuf, MI_LOAD_REGISTER_IMM(1));
- intel_logical_ring_emit_reg(ringbuf, INSTPM);
- intel_logical_ring_emit(ringbuf, instp_mask << 16 | instp_mode);
- intel_logical_ring_advance(ringbuf);
+ intel_ring_emit(ringbuf, MI_NOOP);
+ intel_ring_emit(ringbuf, MI_LOAD_REGISTER_IMM(1));
+ intel_ring_emit_reg(ringbuf, INSTPM);
+ intel_ring_emit(ringbuf, instp_mask << 16 | instp_mode);
+ intel_ring_advance(ringbuf);

dev_priv->relative_constants_mode = instp_mode;
}
@@ -1108,14 +1108,14 @@ static int intel_logical_ring_workarounds_emit(struct drm_i915_gem_request *req)
if (ret)
return ret;

- intel_logical_ring_emit(ringbuf, MI_LOAD_REGISTER_IMM(w->count));
+ intel_ring_emit(ringbuf, MI_LOAD_REGISTER_IMM(w->count));
for (i = 0; i < w->count; i++) {
- intel_logical_ring_emit_reg(ringbuf, w->reg[i].addr);
- intel_logical_ring_emit(ringbuf, w->reg[i].value);
+ intel_ring_emit_reg(ringbuf, w->reg[i].addr);
+ intel_ring_emit(ringbuf, w->reg[i].value);
}
- intel_logical_ring_emit(ringbuf, MI_NOOP);
+ intel_ring_emit(ringbuf, MI_NOOP);

- intel_logical_ring_advance(ringbuf);
+ intel_ring_advance(ringbuf);

ring->gpu_caches_dirty = true;
ret = logical_ring_flush_all_caches(req);
@@ -1570,18 +1570,18 @@ static int intel_logical_ring_emit_pdps(struct drm_i915_gem_request *req)
if (ret)
return ret;

- intel_logical_ring_emit(ringbuf, MI_LOAD_REGISTER_IMM(num_lri_cmds));
+ intel_ring_emit(ringbuf, MI_LOAD_REGISTER_IMM(num_lri_cmds));
for (i = GEN8_LEGACY_PDPES - 1; i >= 0; i--) {
const dma_addr_t pd_daddr = i915_page_dir_dma_addr(ppgtt, i);

- intel_logical_ring_emit_reg(ringbuf, GEN8_RING_PDP_UDW(ring, i));
- intel_logical_ring_emit(ringbuf, upper_32_bits(pd_daddr));
- intel_logical_ring_emit_reg(ringbuf, GEN8_RING_PDP_LDW(ring, i));
- intel_logical_ring_emit(ringbuf, lower_32_bits(pd_daddr));
+ intel_ring_emit_reg(ringbuf, GEN8_RING_PDP_UDW(ring, i));
+ intel_ring_emit(ringbuf, upper_32_bits(pd_daddr));
+ intel_ring_emit_reg(ringbuf, GEN8_RING_PDP_LDW(ring, i));
+ intel_ring_emit(ringbuf, lower_32_bits(pd_daddr));
}

- intel_logical_ring_emit(ringbuf, MI_NOOP);
- intel_logical_ring_advance(ringbuf);
+ intel_ring_emit(ringbuf, MI_NOOP);
+ intel_ring_advance(ringbuf);

return 0;
}
@@ -1616,14 +1616,14 @@ static int gen8_emit_bb_start(struct drm_i915_gem_request *req,
return ret;

/* FIXME(BDW): Address space and security selectors. */
- intel_logical_ring_emit(ringbuf, MI_BATCH_BUFFER_START_GEN8 |
- (ppgtt<<8) |
- (dispatch_flags & I915_DISPATCH_RS ?
- MI_BATCH_RESOURCE_STREAMER : 0));
- intel_logical_ring_emit(ringbuf, lower_32_bits(offset));
- intel_logical_ring_emit(ringbuf, upper_32_bits(offset));
- intel_logical_ring_emit(ringbuf, MI_NOOP);
- intel_logical_ring_advance(ringbuf);
+ intel_ring_emit(ringbuf, MI_BATCH_BUFFER_START_GEN8 |
+ (ppgtt<<8) |
+ (dispatch_flags & I915_DISPATCH_RS ?
+ MI_BATCH_RESOURCE_STREAMER : 0));
+ intel_ring_emit(ringbuf, lower_32_bits(offset));
+ intel_ring_emit(ringbuf, upper_32_bits(offset));
+ intel_ring_emit(ringbuf, MI_NOOP);
+ intel_ring_advance(ringbuf);

return 0;
}
@@ -1674,13 +1674,13 @@ static int gen8_emit_flush(struct drm_i915_gem_request *request,
cmd |= MI_INVALIDATE_BSD;
}

- intel_logical_ring_emit(ringbuf, cmd);
- intel_logical_ring_emit(ringbuf,
- I915_GEM_HWS_SCRATCH_ADDR |
- MI_FLUSH_DW_USE_GTT);
- intel_logical_ring_emit(ringbuf, 0); /* upper addr */
- intel_logical_ring_emit(ringbuf, 0); /* value */
- intel_logical_ring_advance(ringbuf);
+ intel_ring_emit(ringbuf, cmd);
+ intel_ring_emit(ringbuf,
+ I915_GEM_HWS_SCRATCH_ADDR |
+ MI_FLUSH_DW_USE_GTT);
+ intel_ring_emit(ringbuf, 0); /* upper addr */
+ intel_ring_emit(ringbuf, 0); /* value */
+ intel_ring_advance(ringbuf);

return 0;
}
@@ -1727,21 +1727,21 @@ static int gen8_emit_flush_render(struct drm_i915_gem_request *request,
return ret;

if (vf_flush_wa) {
- intel_logical_ring_emit(ringbuf, GFX_OP_PIPE_CONTROL(6));
- intel_logical_ring_emit(ringbuf, 0);
- intel_logical_ring_emit(ringbuf, 0);
- intel_logical_ring_emit(ringbuf, 0);
- intel_logical_ring_emit(ringbuf, 0);
- intel_logical_ring_emit(ringbuf, 0);
+ intel_ring_emit(ringbuf, GFX_OP_PIPE_CONTROL(6));
+ intel_ring_emit(ringbuf, 0);
+ intel_ring_emit(ringbuf, 0);
+ intel_ring_emit(ringbuf, 0);
+ intel_ring_emit(ringbuf, 0);
+ intel_ring_emit(ringbuf, 0);
}

- intel_logical_ring_emit(ringbuf, GFX_OP_PIPE_CONTROL(6));
- intel_logical_ring_emit(ringbuf, flags);
- intel_logical_ring_emit(ringbuf, scratch_addr);
- intel_logical_ring_emit(ringbuf, 0);
- intel_logical_ring_emit(ringbuf, 0);
- intel_logical_ring_emit(ringbuf, 0);
- intel_logical_ring_advance(ringbuf);
+ intel_ring_emit(ringbuf, GFX_OP_PIPE_CONTROL(6));
+ intel_ring_emit(ringbuf, flags);
+ intel_ring_emit(ringbuf, scratch_addr);
+ intel_ring_emit(ringbuf, 0);
+ intel_ring_emit(ringbuf, 0);
+ intel_ring_emit(ringbuf, 0);
+ intel_ring_advance(ringbuf);

return 0;
}
@@ -1786,23 +1786,23 @@ static int gen8_emit_request(struct drm_i915_gem_request *request)
cmd = MI_STORE_DWORD_IMM_GEN4;
cmd |= MI_GLOBAL_GTT;

- intel_logical_ring_emit(ringbuf, cmd);
- intel_logical_ring_emit(ringbuf,
- (ring->status_page.gfx_addr +
- (I915_GEM_HWS_INDEX << MI_STORE_DWORD_INDEX_SHIFT)));
- intel_logical_ring_emit(ringbuf, 0);
- intel_logical_ring_emit(ringbuf, request->fence.seqno);
- intel_logical_ring_emit(ringbuf, MI_USER_INTERRUPT);
- intel_logical_ring_emit(ringbuf, MI_NOOP);
+ intel_ring_emit(ringbuf, cmd);
+ intel_ring_emit(ringbuf,
+ (ring->status_page.gfx_addr +
+ (I915_GEM_HWS_INDEX << MI_STORE_DWORD_INDEX_SHIFT)));
+ intel_ring_emit(ringbuf, 0);
+ intel_ring_emit(ringbuf, request->fence.seqno);
+ intel_ring_emit(ringbuf, MI_USER_INTERRUPT);
+ intel_ring_emit(ringbuf, MI_NOOP);
intel_logical_ring_advance_and_submit(request);

/*
* Here we add two extra NOOPs as padding to avoid
* lite restore of a context with HEAD==TAIL.
*/
- intel_logical_ring_emit(ringbuf, MI_NOOP);
- intel_logical_ring_emit(ringbuf, MI_NOOP);
- intel_logical_ring_advance(ringbuf);
+ intel_ring_emit(ringbuf, MI_NOOP);
+ intel_ring_emit(ringbuf, MI_NOOP);
+ intel_ring_advance(ringbuf);

return 0;
}
diff --git a/drivers/gpu/drm/i915/intel_lrc.h b/drivers/gpu/drm/i915/intel_lrc.h
index 1e58f2550777..9d4aa699e593 100644
--- a/drivers/gpu/drm/i915/intel_lrc.h
+++ b/drivers/gpu/drm/i915/intel_lrc.h
@@ -63,32 +63,6 @@ int intel_logical_rings_init(struct drm_device *dev);
int intel_logical_ring_begin(struct drm_i915_gem_request *req, int num_dwords);

int logical_ring_flush_all_caches(struct drm_i915_gem_request *req);
-/**
- * intel_logical_ring_advance() - advance the ringbuffer tail
- * @ringbuf: Ringbuffer to advance.
- *
- * The tail is only updated in our logical ringbuffer struct.
- */
-static inline void intel_logical_ring_advance(struct intel_ringbuffer *ringbuf)
-{
- intel_ringbuffer_advance(ringbuf);
-}
-
-/**
- * intel_logical_ring_emit() - write a DWORD to the ringbuffer.
- * @ringbuf: Ringbuffer to write to.
- * @data: DWORD to write.
- */
-static inline void intel_logical_ring_emit(struct intel_ringbuffer *ringbuf,
- u32 data)
-{
- intel_ringbuffer_emit(ringbuf, data);
-}
-static inline void intel_logical_ring_emit_reg(struct intel_ringbuffer *ringbuf,
- i915_reg_t reg)
-{
- intel_logical_ring_emit(ringbuf, i915_mmio_reg_offset(reg));
-}

/* Logical Ring Contexts */

diff --git a/drivers/gpu/drm/i915/intel_mocs.c b/drivers/gpu/drm/i915/intel_mocs.c
index fed7bea19cc9..d8a7fdc7baeb 100644
--- a/drivers/gpu/drm/i915/intel_mocs.c
+++ b/drivers/gpu/drm/i915/intel_mocs.c
@@ -206,13 +206,11 @@ static int emit_mocs_control_table(struct drm_i915_gem_request *req,
return ret;
}

- intel_logical_ring_emit(ringbuf,
- MI_LOAD_REGISTER_IMM(GEN9_NUM_MOCS_ENTRIES));
+ intel_ring_emit(ringbuf, MI_LOAD_REGISTER_IMM(GEN9_NUM_MOCS_ENTRIES));

for (index = 0; index < table->size; index++) {
- intel_logical_ring_emit_reg(ringbuf, mocs_register(ring, index));
- intel_logical_ring_emit(ringbuf,
- table->table[index].control_value);
+ intel_ring_emit_reg(ringbuf, mocs_register(ring, index));
+ intel_ring_emit(ringbuf, table->table[index].control_value);
}

/*
@@ -224,12 +222,12 @@ static int emit_mocs_control_table(struct drm_i915_gem_request *req,
* that value to all the used entries.
*/
for (; index < GEN9_NUM_MOCS_ENTRIES; index++) {
- intel_logical_ring_emit_reg(ringbuf, mocs_register(ring, index));
- intel_logical_ring_emit(ringbuf, table->table[0].control_value);
+ intel_ring_emit_reg(ringbuf, mocs_register(ring, index));
+ intel_ring_emit(ringbuf, table->table[0].control_value);
}

- intel_logical_ring_emit(ringbuf, MI_NOOP);
- intel_logical_ring_advance(ringbuf);
+ intel_ring_emit(ringbuf, MI_NOOP);
+ intel_ring_advance(ringbuf);

return 0;
}
@@ -265,15 +263,15 @@ static int emit_mocs_l3cc_table(struct drm_i915_gem_request *req,
return ret;
}

- intel_logical_ring_emit(ringbuf,
+ intel_ring_emit(ringbuf,
MI_LOAD_REGISTER_IMM(GEN9_NUM_MOCS_ENTRIES / 2));

for (i = 0, count = 0; i < table->size / 2; i++, count += 2) {
value = (table->table[count].l3cc_value & 0xffff) |
((table->table[count + 1].l3cc_value & 0xffff) << 16);

- intel_logical_ring_emit_reg(ringbuf, GEN9_LNCFCMOCS(i));
- intel_logical_ring_emit(ringbuf, value);
+ intel_ring_emit_reg(ringbuf, GEN9_LNCFCMOCS(i));
+ intel_ring_emit(ringbuf, value);
}

if (table->size & 0x01) {
@@ -289,14 +287,14 @@ static int emit_mocs_l3cc_table(struct drm_i915_gem_request *req,
* they are reserved by the hardware.
*/
for (; i < GEN9_NUM_MOCS_ENTRIES / 2; i++) {
- intel_logical_ring_emit_reg(ringbuf, GEN9_LNCFCMOCS(i));
- intel_logical_ring_emit(ringbuf, value);
+ intel_ring_emit_reg(ringbuf, GEN9_LNCFCMOCS(i));
+ intel_ring_emit(ringbuf, value);

value = filler;
}

- intel_logical_ring_emit(ringbuf, MI_NOOP);
- intel_logical_ring_advance(ringbuf);
+ intel_ring_emit(ringbuf, MI_NOOP);
+ intel_ring_advance(ringbuf);

return 0;
}
diff --git a/drivers/gpu/drm/i915/intel_overlay.c b/drivers/gpu/drm/i915/intel_overlay.c
index 76f1980a7541..6dca0e470e61 100644
--- a/drivers/gpu/drm/i915/intel_overlay.c
+++ b/drivers/gpu/drm/i915/intel_overlay.c
@@ -252,11 +252,11 @@ static int intel_overlay_on(struct intel_overlay *overlay)

overlay->active = true;

- intel_ring_emit(ring, MI_OVERLAY_FLIP | MI_OVERLAY_ON);
- intel_ring_emit(ring, overlay->flip_addr | OFC_UPDATE);
- intel_ring_emit(ring, MI_WAIT_FOR_EVENT | MI_WAIT_FOR_OVERLAY_FLIP);
- intel_ring_emit(ring, MI_NOOP);
- intel_ring_advance(ring);
+ intel_ring_emit(req->ringbuf, MI_OVERLAY_FLIP | MI_OVERLAY_ON);
+ intel_ring_emit(req->ringbuf, overlay->flip_addr | OFC_UPDATE);
+ intel_ring_emit(req->ringbuf, MI_WAIT_FOR_EVENT | MI_WAIT_FOR_OVERLAY_FLIP);
+ intel_ring_emit(req->ringbuf, MI_NOOP);
+ intel_ring_advance(req->ringbuf);

return intel_overlay_do_wait_request(overlay, req, NULL);
}
@@ -293,9 +293,9 @@ static int intel_overlay_continue(struct intel_overlay *overlay,
return ret;
}

- intel_ring_emit(ring, MI_OVERLAY_FLIP | MI_OVERLAY_CONTINUE);
- intel_ring_emit(ring, flip_addr);
- intel_ring_advance(ring);
+ intel_ring_emit(req->ringbuf, MI_OVERLAY_FLIP | MI_OVERLAY_CONTINUE);
+ intel_ring_emit(req->ringbuf, flip_addr);
+ intel_ring_advance(req->ringbuf);

WARN_ON(overlay->last_flip_req);
i915_gem_request_assign(&overlay->last_flip_req, req);
@@ -360,22 +360,22 @@ static int intel_overlay_off(struct intel_overlay *overlay)
}

/* wait for overlay to go idle */
- intel_ring_emit(ring, MI_OVERLAY_FLIP | MI_OVERLAY_CONTINUE);
- intel_ring_emit(ring, flip_addr);
- intel_ring_emit(ring, MI_WAIT_FOR_EVENT | MI_WAIT_FOR_OVERLAY_FLIP);
+ intel_ring_emit(req->ringbuf, MI_OVERLAY_FLIP | MI_OVERLAY_CONTINUE);
+ intel_ring_emit(req->ringbuf, flip_addr);
+ intel_ring_emit(req->ringbuf, MI_WAIT_FOR_EVENT | MI_WAIT_FOR_OVERLAY_FLIP);
/* turn overlay off */
if (IS_I830(dev)) {
/* Workaround: Don't disable the overlay fully, since otherwise
* it dies on the next OVERLAY_ON cmd. */
- intel_ring_emit(ring, MI_NOOP);
- intel_ring_emit(ring, MI_NOOP);
- intel_ring_emit(ring, MI_NOOP);
+ intel_ring_emit(req->ringbuf, MI_NOOP);
+ intel_ring_emit(req->ringbuf, MI_NOOP);
+ intel_ring_emit(req->ringbuf, MI_NOOP);
} else {
- intel_ring_emit(ring, MI_OVERLAY_FLIP | MI_OVERLAY_OFF);
- intel_ring_emit(ring, flip_addr);
- intel_ring_emit(ring, MI_WAIT_FOR_EVENT | MI_WAIT_FOR_OVERLAY_FLIP);
+ intel_ring_emit(req->ringbuf, MI_OVERLAY_FLIP | MI_OVERLAY_OFF);
+ intel_ring_emit(req->ringbuf, flip_addr);
+ intel_ring_emit(req->ringbuf, MI_WAIT_FOR_EVENT | MI_WAIT_FOR_OVERLAY_FLIP);
}
- intel_ring_advance(ring);
+ intel_ring_advance(req->ringbuf);

return intel_overlay_do_wait_request(overlay, req, intel_overlay_off_tail);
}
@@ -433,9 +433,9 @@ static int intel_overlay_release_old_vid(struct intel_overlay *overlay)
return ret;
}

- intel_ring_emit(ring, MI_WAIT_FOR_EVENT | MI_WAIT_FOR_OVERLAY_FLIP);
- intel_ring_emit(ring, MI_NOOP);
- intel_ring_advance(ring);
+ intel_ring_emit(req->ringbuf, MI_WAIT_FOR_EVENT | MI_WAIT_FOR_OVERLAY_FLIP);
+ intel_ring_emit(req->ringbuf, MI_NOOP);
+ intel_ring_advance(req->ringbuf);

ret = intel_overlay_do_wait_request(overlay, req,
intel_overlay_release_old_vid_tail);
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index d17dd33ee94c..86c54584f64a 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -71,7 +71,7 @@ gen2_render_ring_flush(struct drm_i915_gem_request *req,
u32 invalidate_domains,
u32 flush_domains)
{
- struct intel_engine_cs *ring = req->ring;
+ struct intel_ringbuffer *ring = req->ringbuf;
u32 cmd;
int ret;

@@ -98,7 +98,7 @@ gen4_render_ring_flush(struct drm_i915_gem_request *req,
u32 invalidate_domains,
u32 flush_domains)
{
- struct intel_engine_cs *ring = req->ring;
+ struct intel_ringbuffer *ring = req->ringbuf;
u32 cmd;
int ret;

@@ -191,8 +191,8 @@ gen4_render_ring_flush(struct drm_i915_gem_request *req,
static int
intel_emit_post_sync_nonzero_flush(struct drm_i915_gem_request *req)
{
- struct intel_engine_cs *ring = req->ring;
- u32 scratch_addr = ring->scratch.gtt_offset + 2 * CACHELINE_BYTES;
+ struct intel_ringbuffer *ring = req->ringbuf;
+ u32 scratch_addr = req->ring->scratch.gtt_offset + 2 * CACHELINE_BYTES;
int ret;

ret = intel_ring_begin(req, 6);
@@ -227,9 +227,9 @@ static int
gen6_render_ring_flush(struct drm_i915_gem_request *req,
u32 invalidate_domains, u32 flush_domains)
{
- struct intel_engine_cs *ring = req->ring;
+ struct intel_ringbuffer *ring = req->ringbuf;
u32 flags = 0;
- u32 scratch_addr = ring->scratch.gtt_offset + 2 * CACHELINE_BYTES;
+ u32 scratch_addr = req->ring->scratch.gtt_offset + 2 * CACHELINE_BYTES;
int ret;

/* Force SNB workarounds for PIPE_CONTROL flushes */
@@ -279,7 +279,7 @@ gen6_render_ring_flush(struct drm_i915_gem_request *req,
static int
gen7_render_ring_cs_stall_wa(struct drm_i915_gem_request *req)
{
- struct intel_engine_cs *ring = req->ring;
+ struct intel_ringbuffer *ring = req->ringbuf;
int ret;

ret = intel_ring_begin(req, 4);
@@ -300,9 +300,9 @@ static int
gen7_render_ring_flush(struct drm_i915_gem_request *req,
u32 invalidate_domains, u32 flush_domains)
{
- struct intel_engine_cs *ring = req->ring;
+ struct intel_ringbuffer *ring = req->ringbuf;
u32 flags = 0;
- u32 scratch_addr = ring->scratch.gtt_offset + 2 * CACHELINE_BYTES;
+ u32 scratch_addr = req->ring->scratch.gtt_offset + 2 * CACHELINE_BYTES;
int ret;

/*
@@ -363,7 +363,7 @@ static int
gen8_emit_pipe_control(struct drm_i915_gem_request *req,
u32 flags, u32 scratch_addr)
{
- struct intel_engine_cs *ring = req->ring;
+ struct intel_ringbuffer *ring = req->ringbuf;
int ret;

ret = intel_ring_begin(req, 6);
@@ -688,15 +688,15 @@ err:

static int intel_ring_workarounds_emit(struct drm_i915_gem_request *req)
{
- int ret, i;
- struct intel_engine_cs *ring = req->ring;
+ struct intel_ringbuffer *ring = req->ringbuf;
struct drm_i915_private *dev_priv = req->i915;
struct i915_workarounds *w = &dev_priv->workarounds;
+ int ret, i;

if (w->count == 0)
return 0;

- ring->gpu_caches_dirty = true;
+ req->ring->gpu_caches_dirty = true;
ret = intel_ring_flush_all_caches(req);
if (ret)
return ret;
@@ -714,7 +714,7 @@ static int intel_ring_workarounds_emit(struct drm_i915_gem_request *req)

intel_ring_advance(ring);

- ring->gpu_caches_dirty = true;
+ req->ring->gpu_caches_dirty = true;
ret = intel_ring_flush_all_caches(req);
if (ret)
return ret;
@@ -1191,7 +1191,7 @@ static int gen8_rcs_signal(struct drm_i915_gem_request *signaller_req,
unsigned int num_dwords)
{
#define MBOX_UPDATE_DWORDS 8
- struct intel_engine_cs *signaller = signaller_req->ring;
+ struct intel_ringbuffer *signaller = signaller_req->ringbuf;
struct drm_i915_private *dev_priv = signaller_req->i915;
struct intel_engine_cs *waiter;
int i, ret, num_rings;
@@ -1205,7 +1205,7 @@ static int gen8_rcs_signal(struct drm_i915_gem_request *signaller_req,
return ret;

for_each_ring(waiter, dev_priv, i) {
- u64 gtt_offset = signaller->semaphore.signal_ggtt[i];
+ u64 gtt_offset = signaller_req->ring->semaphore.signal_ggtt[i];
if (gtt_offset == MI_SEMAPHORE_SYNC_INVALID)
continue;

@@ -1229,7 +1229,7 @@ static int gen8_xcs_signal(struct drm_i915_gem_request *signaller_req,
unsigned int num_dwords)
{
#define MBOX_UPDATE_DWORDS 6
- struct intel_engine_cs *signaller = signaller_req->ring;
+ struct intel_ringbuffer *signaller = signaller_req->ringbuf;
struct drm_i915_private *dev_priv = signaller_req->i915;
struct intel_engine_cs *waiter;
int i, ret, num_rings;
@@ -1243,7 +1243,7 @@ static int gen8_xcs_signal(struct drm_i915_gem_request *signaller_req,
return ret;

for_each_ring(waiter, dev_priv, i) {
- u64 gtt_offset = signaller->semaphore.signal_ggtt[i];
+ u64 gtt_offset = signaller_req->ring->semaphore.signal_ggtt[i];
if (gtt_offset == MI_SEMAPHORE_SYNC_INVALID)
continue;

@@ -1264,7 +1264,7 @@ static int gen8_xcs_signal(struct drm_i915_gem_request *signaller_req,
static int gen6_signal(struct drm_i915_gem_request *signaller_req,
unsigned int num_dwords)
{
- struct intel_engine_cs *signaller = signaller_req->ring;
+ struct intel_ringbuffer *signaller = signaller_req->ringbuf;
struct drm_i915_private *dev_priv = signaller_req->i915;
struct intel_engine_cs *useless;
int i, ret, num_rings;
@@ -1279,7 +1279,7 @@ static int gen6_signal(struct drm_i915_gem_request *signaller_req,
return ret;

for_each_ring(useless, dev_priv, i) {
- i915_reg_t mbox_reg = signaller->semaphore.mbox.signal[i];
+ i915_reg_t mbox_reg = signaller_req->ring->semaphore.mbox.signal[i];

if (i915_mmio_reg_valid(mbox_reg)) {
intel_ring_emit(signaller, MI_LOAD_REGISTER_IMM(1));
@@ -1306,11 +1306,11 @@ static int gen6_signal(struct drm_i915_gem_request *signaller_req,
static int
gen6_add_request(struct drm_i915_gem_request *req)
{
- struct intel_engine_cs *ring = req->ring;
+ struct intel_ringbuffer *ring = req->ringbuf;
int ret;

- if (ring->semaphore.signal)
- ret = ring->semaphore.signal(req, 4);
+ if (req->ring->semaphore.signal)
+ ret = req->ring->semaphore.signal(req, 4);
else
ret = intel_ring_begin(req, 4);

@@ -1321,15 +1321,14 @@ gen6_add_request(struct drm_i915_gem_request *req)
intel_ring_emit(ring, I915_GEM_HWS_INDEX << MI_STORE_DWORD_INDEX_SHIFT);
intel_ring_emit(ring, req->fence.seqno);
intel_ring_emit(ring, MI_USER_INTERRUPT);
- __intel_ring_advance(ring);
+ __intel_ring_advance(req->ring);

return 0;
}

-static inline bool i915_gem_has_seqno_wrapped(struct drm_device *dev,
+static inline bool i915_gem_has_seqno_wrapped(struct drm_i915_private *dev_priv,
u32 seqno)
{
- struct drm_i915_private *dev_priv = dev->dev_private;
return dev_priv->last_seqno < seqno;
}

@@ -1346,7 +1345,7 @@ gen8_ring_sync(struct drm_i915_gem_request *waiter_req,
struct intel_engine_cs *signaller,
u32 seqno)
{
- struct intel_engine_cs *waiter = waiter_req->ring;
+ struct intel_ringbuffer *waiter = waiter_req->ringbuf;
struct drm_i915_private *dev_priv = waiter_req->i915;
int ret;

@@ -1360,9 +1359,11 @@ gen8_ring_sync(struct drm_i915_gem_request *waiter_req,
MI_SEMAPHORE_SAD_GTE_SDD);
intel_ring_emit(waiter, seqno);
intel_ring_emit(waiter,
- lower_32_bits(GEN8_WAIT_OFFSET(waiter, signaller->id)));
+ lower_32_bits(GEN8_WAIT_OFFSET(waiter_req->ring,
+ signaller->id)));
intel_ring_emit(waiter,
- upper_32_bits(GEN8_WAIT_OFFSET(waiter, signaller->id)));
+ upper_32_bits(GEN8_WAIT_OFFSET(waiter_req->ring,
+ signaller->id)));
intel_ring_advance(waiter);
return 0;
}
@@ -1372,11 +1373,11 @@ gen6_ring_sync(struct drm_i915_gem_request *waiter_req,
struct intel_engine_cs *signaller,
u32 seqno)
{
- struct intel_engine_cs *waiter = waiter_req->ring;
+ struct intel_ringbuffer *waiter = waiter_req->ringbuf;
u32 dw1 = MI_SEMAPHORE_MBOX |
MI_SEMAPHORE_COMPARE |
MI_SEMAPHORE_REGISTER;
- u32 wait_mbox = signaller->semaphore.mbox.wait[waiter->id];
+ u32 wait_mbox = signaller->semaphore.mbox.wait[waiter_req->ring->id];
int ret;

/* Throughout all of the GEM code, seqno passed implies our current
@@ -1392,7 +1393,7 @@ gen6_ring_sync(struct drm_i915_gem_request *waiter_req,
return ret;

/* If seqno wrap happened, omit the wait with no-ops */
- if (likely(!i915_gem_has_seqno_wrapped(waiter->dev, seqno))) {
+ if (likely(!i915_gem_has_seqno_wrapped(waiter_req->i915, seqno))) {
intel_ring_emit(waiter, dw1 | wait_mbox);
intel_ring_emit(waiter, seqno);
intel_ring_emit(waiter, 0);
@@ -1420,7 +1421,7 @@ do { \
static int
pc_render_add_request(struct drm_i915_gem_request *req)
{
- struct intel_engine_cs *ring = req->ring;
+ struct intel_ringbuffer *ring = req->ringbuf;
u32 addr = req->ring->status_page.gfx_addr +
(I915_GEM_HWS_INDEX << MI_STORE_DWORD_INDEX_SHIFT);
u32 scratch_addr = addr;
@@ -1464,7 +1465,7 @@ pc_render_add_request(struct drm_i915_gem_request *req)
intel_ring_emit(ring, addr | PIPE_CONTROL_GLOBAL_GTT);
intel_ring_emit(ring, req->fence.seqno);
intel_ring_emit(ring, 0);
- __intel_ring_advance(ring);
+ __intel_ring_advance(req->ring);

return 0;
}
@@ -1547,7 +1548,7 @@ bsd_ring_flush(struct drm_i915_gem_request *req,
u32 invalidate_domains,
u32 flush_domains)
{
- struct intel_engine_cs *ring = req->ring;
+ struct intel_ringbuffer *ring = req->ringbuf;
int ret;

ret = intel_ring_begin(req, 2);
@@ -1563,7 +1564,7 @@ bsd_ring_flush(struct drm_i915_gem_request *req,
static int
i9xx_add_request(struct drm_i915_gem_request *req)
{
- struct intel_engine_cs *ring = req->ring;
+ struct intel_ringbuffer *ring = req->ringbuf;
int ret;

ret = intel_ring_begin(req, 4);
@@ -1574,7 +1575,7 @@ i9xx_add_request(struct drm_i915_gem_request *req)
intel_ring_emit(ring, I915_GEM_HWS_INDEX << MI_STORE_DWORD_INDEX_SHIFT);
intel_ring_emit(ring, req->fence.seqno);
intel_ring_emit(ring, MI_USER_INTERRUPT);
- __intel_ring_advance(ring);
+ __intel_ring_advance(req->ring);

return 0;
}
@@ -1657,7 +1658,7 @@ i965_dispatch_execbuffer(struct drm_i915_gem_request *req,
u64 offset, u32 length,
unsigned dispatch_flags)
{
- struct intel_engine_cs *ring = req->ring;
+ struct intel_ringbuffer *ring = req->ringbuf;
int ret;

ret = intel_ring_begin(req, 2);
@@ -1684,8 +1685,8 @@ i830_dispatch_execbuffer(struct drm_i915_gem_request *req,
u64 offset, u32 len,
unsigned dispatch_flags)
{
- struct intel_engine_cs *ring = req->ring;
- u32 cs_offset = ring->scratch.gtt_offset;
+ struct intel_ringbuffer *ring = req->ringbuf;
+ u32 cs_offset = req->ring->scratch.gtt_offset;
int ret;

ret = intel_ring_begin(req, 6);
@@ -1747,7 +1748,7 @@ i915_dispatch_execbuffer(struct drm_i915_gem_request *req,
u64 offset, u32 len,
unsigned dispatch_flags)
{
- struct intel_engine_cs *ring = req->ring;
+ struct intel_ringbuffer *ring = req->ringbuf;
int ret;

ret = intel_ring_begin(req, 2);
@@ -2256,8 +2257,8 @@ int intel_ring_begin(struct drm_i915_gem_request *req,
/* Align the ring tail to a cacheline boundary */
int intel_ring_cacheline_align(struct drm_i915_gem_request *req)
{
- struct intel_engine_cs *ring = req->ring;
- int num_dwords = (ring->buffer->tail & (CACHELINE_BYTES - 1)) / sizeof(uint32_t);
+ struct intel_ringbuffer *ring = req->ringbuf;
+ int num_dwords = (ring->tail & (CACHELINE_BYTES - 1)) / sizeof(uint32_t);
int ret;

if (num_dwords == 0)
@@ -2331,7 +2332,7 @@ static void gen6_bsd_ring_write_tail(struct intel_engine_cs *ring,
static int gen6_bsd_ring_flush(struct drm_i915_gem_request *req,
u32 invalidate, u32 flush)
{
- struct intel_engine_cs *ring = req->ring;
+ struct intel_ringbuffer *ring = req->ringbuf;
uint32_t cmd;
int ret;

@@ -2340,7 +2341,7 @@ static int gen6_bsd_ring_flush(struct drm_i915_gem_request *req,
return ret;

cmd = MI_FLUSH_DW;
- if (INTEL_INFO(ring->dev)->gen >= 8)
+ if (INTEL_INFO(req->i915)->gen >= 8)
cmd += 1;

/* We always require a command barrier so that subsequent
@@ -2361,7 +2362,7 @@ static int gen6_bsd_ring_flush(struct drm_i915_gem_request *req,

intel_ring_emit(ring, cmd);
intel_ring_emit(ring, I915_GEM_HWS_SCRATCH_ADDR | MI_FLUSH_DW_USE_GTT);
- if (INTEL_INFO(ring->dev)->gen >= 8) {
+ if (INTEL_INFO(req->i915)->gen >= 8) {
intel_ring_emit(ring, 0); /* upper addr */
intel_ring_emit(ring, 0); /* value */
} else {
@@ -2377,7 +2378,7 @@ gen8_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
u64 offset, u32 len,
unsigned dispatch_flags)
{
- struct intel_engine_cs *ring = req->ring;
+ struct intel_ringbuffer *ring = req->ringbuf;
bool ppgtt = USES_PPGTT(req->i915) &&
!(dispatch_flags & I915_DISPATCH_SECURE);
int ret;
@@ -2403,7 +2404,7 @@ hsw_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
u64 offset, u32 len,
unsigned dispatch_flags)
{
- struct intel_engine_cs *ring = req->ring;
+ struct intel_ringbuffer *ring = req->ringbuf;
int ret;

ret = intel_ring_begin(req, 2);
@@ -2428,7 +2429,7 @@ gen6_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
u64 offset, u32 len,
unsigned dispatch_flags)
{
- struct intel_engine_cs *ring = req->ring;
+ struct intel_ringbuffer *ring = req->ringbuf;
int ret;

ret = intel_ring_begin(req, 2);
@@ -2451,7 +2452,7 @@ gen6_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
static int gen6_ring_flush(struct drm_i915_gem_request *req,
u32 invalidate, u32 flush)
{
- struct intel_engine_cs *ring = req->ring;
+ struct intel_ringbuffer *ring = req->ringbuf;
uint32_t cmd;
int ret;

diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 7669a8d30f27..9c19a6ca8e7d 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -468,29 +468,20 @@ int intel_ring_alloc_request_extras(struct drm_i915_gem_request *request);

int __must_check intel_ring_begin(struct drm_i915_gem_request *req, int n);
int __must_check intel_ring_cacheline_align(struct drm_i915_gem_request *req);
-static inline void intel_ringbuffer_emit(struct intel_ringbuffer *rb,
- u32 data)
+static inline void intel_ring_emit(struct intel_ringbuffer *rb,
+ u32 data)
{
*(uint32_t *)(rb->virtual_start + rb->tail) = data;
rb->tail += 4;
}
-static inline void intel_ringbuffer_advance(struct intel_ringbuffer *rb)
-{
- rb->tail &= rb->size - 1;
-}
-static inline void intel_ring_emit(struct intel_engine_cs *ring,
- u32 data)
-{
- intel_ringbuffer_emit(ring->buffer, data);
-}
-static inline void intel_ring_emit_reg(struct intel_engine_cs *ring,
+static inline void intel_ring_emit_reg(struct intel_ringbuffer *rb,
i915_reg_t reg)
{
- intel_ring_emit(ring, i915_mmio_reg_offset(reg));
+ intel_ring_emit(rb, i915_mmio_reg_offset(reg));
}
-static inline void intel_ring_advance(struct intel_engine_cs *ring)
+static inline void intel_ring_advance(struct intel_ringbuffer *rb)
{
- intel_ringbuffer_advance(ring->buffer);
+ rb->tail &= rb->size - 1;
}
int __intel_ring_space(int head, int tail, int size);
void intel_ring_update_space(struct intel_ringbuffer *ringbuf);
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:16:51 UTC
Permalink
The request tells us where to read the ringbuf from, so use that
information to simplify the error capture. If no request was active at
the time of the hang, the ring is idle and there is no information
inside the ring pertaining to the hang.

Note carefully that this will reduce the amount of information stored in
the error state - any ring without an active request will not be
recorded.
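
As an aside, a minimal standalone C sketch of the capture flow this
leaves behind; the struct and function names below are simplified
stand-ins for illustration, not the driver's real types:

#include <stdio.h>

/* Simplified stand-ins for the driver structures (illustrative only). */
struct ringbuf { unsigned head, tail; };
struct request { struct ringbuf *ringbuf; };
struct ring_error { int valid; unsigned cpu_ring_head, cpu_ring_tail; };

/* Capture ring state only when a request was active at hang time: the
 * request already knows which ringbuf it was emitted into, so there is
 * no need to special-case execlists vs. legacy submission. */
static void record_ring(const struct request *request, struct ring_error *e)
{
	if (!request) {
		/* Idle ring: nothing in the ring pertains to the hang. */
		e->valid = 0;
		return;
	}
	e->valid = 1;
	e->cpu_ring_head = request->ringbuf->head;
	e->cpu_ring_tail = request->ringbuf->tail;
}

int main(void)
{
	struct ringbuf rb = { .head = 0x40, .tail = 0x80 };
	struct request rq = { .ringbuf = &rb };
	struct ring_error err;

	record_ring(&rq, &err);
	printf("valid=%d head=%#x tail=%#x\n",
	       err.valid, err.cpu_ring_head, err.cpu_ring_tail);

	record_ring(NULL, &err); /* idle ring: nothing captured */
	printf("valid=%d\n", err.valid);
	return 0;
}

The second call models the idle-ring case described above: with no
active request, the ring's contents are simply not recorded.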

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
Reviewed-by: Dave Gordon <***@intel.com>
---
drivers/gpu/drm/i915/i915_gpu_error.c | 28 ++++++++--------------------
1 file changed, 8 insertions(+), 20 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index 3e137fc701cf..93da2c7581f6 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -995,7 +995,6 @@ static void i915_gem_record_rings(struct drm_device *dev,

for (i = 0; i < I915_NUM_RINGS; i++) {
struct intel_engine_cs *ring = &dev_priv->ring[i];
- struct intel_ringbuffer *rbuf;

error->ring[i].pid = -1;

@@ -1009,6 +1008,7 @@ static void i915_gem_record_rings(struct drm_device *dev,
request = i915_gem_find_active_request(ring);
if (request) {
struct i915_address_space *vm;
+ struct intel_ringbuffer *rb;

vm = request->ctx && request->ctx->ppgtt ?
&request->ctx->ppgtt->base :
@@ -1039,26 +1039,14 @@ static void i915_gem_record_rings(struct drm_device *dev,
}
rcu_read_unlock();
}
- }

- if (i915.enable_execlists) {
- /* TODO: This is only a small fix to keep basic error
- * capture working, but we need to add more information
- * for it to be useful (e.g. dump the context being
- * executed).
- */
- if (request)
- rbuf = request->ctx->engine[ring->id].ringbuf;
- else
- rbuf = ring->default_context->engine[ring->id].ringbuf;
- } else
- rbuf = ring->buffer;
-
- error->ring[i].cpu_ring_head = rbuf->head;
- error->ring[i].cpu_ring_tail = rbuf->tail;
-
- error->ring[i].ringbuffer =
- i915_error_ggtt_object_create(dev_priv, rbuf->obj);
+ rb = request->ringbuf;
+ error->ring[i].cpu_ring_head = rb->head;
+ error->ring[i].cpu_ring_tail = rb->tail;
+ error->ring[i].ringbuffer =
+ i915_error_ggtt_object_create(dev_priv,
+ rb->obj);
+ }

error->ring[i].hws_page =
i915_error_ggtt_object_create(dev_priv, ring->status_page.obj);
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:17:10 UTC
Permalink
Now that we have disambiguated ring and engine, we can use the clearer
and more consistent name for the intel_ringbuffer pointer in the
request.
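
For illustration, a minimal standalone sketch of the resulting calling
convention; everything below (struct names, helpers) is a simplified
stand-in that mirrors the inline emit/advance helpers, not the driver
code itself:

#include <stdint.h>
#include <string.h>
#include <stdio.h>

struct intel_ringbuffer_sketch {
	uint8_t *virtual_start;
	uint32_t tail, size;	/* size must be a power of two */
};

/* Write one dword at the tail, as intel_ring_emit() does. */
static void ring_emit(struct intel_ringbuffer_sketch *rb, uint32_t data)
{
	memcpy(rb->virtual_start + rb->tail, &data, sizeof(data));
	rb->tail += 4;
}

/* Wrap the tail, as intel_ring_advance() does. */
static void ring_advance(struct intel_ringbuffer_sketch *rb)
{
	rb->tail &= rb->size - 1;
}

struct request_sketch { struct intel_ringbuffer_sketch *ring; };

int main(void)
{
	uint8_t backing[64] = {0};
	struct intel_ringbuffer_sketch rb = { backing, 0, sizeof(backing) };
	struct request_sketch req = { &rb };

	/* After the rename, call sites read req->ring, never req->ringbuf: */
	ring_emit(req.ring, 0x0);	/* e.g. MI_NOOP */
	ring_advance(req.ring);
	printf("tail=%u\n", (unsigned)req.ring->tail);
	return 0;
}

The mask by size - 1 in the advance step matches the power-of-two wrap
in the new intel_ring_advance(), which now takes the ringbuffer
directly rather than going through the engine.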

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_gem.c | 8 +-
drivers/gpu/drm/i915/i915_gem_context.c | 2 +-
drivers/gpu/drm/i915/i915_gem_execbuffer.c | 4 +-
drivers/gpu/drm/i915/i915_gem_gtt.c | 6 +-
drivers/gpu/drm/i915/i915_gem_request.c | 20 ++--
drivers/gpu/drm/i915/i915_gem_request.h | 2 +-
drivers/gpu/drm/i915/i915_gpu_error.c | 31 +++---
drivers/gpu/drm/i915/i915_guc_submission.c | 4 +-
drivers/gpu/drm/i915/intel_display.c | 10 +-
drivers/gpu/drm/i915/intel_lrc.c | 152 ++++++++++++++---------------
drivers/gpu/drm/i915/intel_mocs.c | 34 +++----
drivers/gpu/drm/i915/intel_overlay.c | 42 ++++----
drivers/gpu/drm/i915/intel_ringbuffer.c | 86 ++++++++--------
13 files changed, 198 insertions(+), 203 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 6622c9bb3af8..430c439ece26 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -4083,11 +4083,11 @@ int i915_gem_l3_remap(struct drm_i915_gem_request *req, int slice)
* at initialization time.
*/
for (i = 0; i < GEN7_L3LOG_SIZE / 4; i++) {
- intel_ring_emit(req->ringbuf, MI_LOAD_REGISTER_IMM(1));
- intel_ring_emit_reg(req->ringbuf, GEN7_L3LOG(slice, i));
- intel_ring_emit(req->ringbuf, remap_info[i]);
+ intel_ring_emit(req->ring, MI_LOAD_REGISTER_IMM(1));
+ intel_ring_emit_reg(req->ring, GEN7_L3LOG(slice, i));
+ intel_ring_emit(req->ring, remap_info[i]);
}
- intel_ring_advance(req->ringbuf);
+ intel_ring_advance(req->ring);

return ret;
}
diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index dece033cf604..5b4e77a80c19 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -519,7 +519,7 @@ i915_gem_context_get(struct drm_i915_file_private *file_priv, u32 id)
static inline int
mi_set_context(struct drm_i915_gem_request *req, u32 hw_flags)
{
- struct intel_ringbuffer *ring = req->ringbuf;
+ struct intel_ringbuffer *ring = req->ring;
u32 flags = hw_flags | MI_MM_SPACE_GTT;
const int num_rings =
/* Use an extended w/a on ivb+ if signalling from other rings */
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index e7df91f9a51f..a0f5a997c2f2 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -1148,7 +1148,7 @@ i915_gem_execbuffer_retire_commands(struct i915_execbuffer_params *params)
static int
i915_reset_gen7_sol_offsets(struct drm_i915_gem_request *req)
{
- struct intel_ringbuffer *ring = req->ringbuf;
+ struct intel_ringbuffer *ring = req->ring;
int ret, i;

if (!IS_GEN7(req->i915) || req->engine->id != RCS) {
@@ -1229,7 +1229,7 @@ i915_gem_ringbuffer_submission(struct i915_execbuffer_params *params,
struct drm_i915_gem_execbuffer2 *args,
struct list_head *vmas)
{
- struct intel_ringbuffer *ring = params->request->ringbuf;
+ struct intel_ringbuffer *ring = params->request->ring;
struct drm_i915_private *dev_priv = params->request->i915;
u64 exec_start, exec_len;
int instp_mode;
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index cb7cb59d4c4a..38c109cda904 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -656,7 +656,7 @@ static int gen8_write_pdp(struct drm_i915_gem_request *req,
unsigned entry,
dma_addr_t addr)
{
- struct intel_ringbuffer *ring = req->ringbuf;
+ struct intel_ringbuffer *ring = req->ring;
int ret;

BUG_ON(entry >= 4);
@@ -1648,7 +1648,7 @@ static uint32_t get_pd_offset(struct i915_hw_ppgtt *ppgtt)
static int hsw_mm_switch(struct i915_hw_ppgtt *ppgtt,
struct drm_i915_gem_request *req)
{
- struct intel_ringbuffer *ring = req->ringbuf;
+ struct intel_ringbuffer *ring = req->ring;
int ret;

/* NB: TLBs must be flushed and invalidated before a switch */
@@ -1686,7 +1686,7 @@ static int vgpu_mm_switch(struct i915_hw_ppgtt *ppgtt,
static int gen7_mm_switch(struct i915_hw_ppgtt *ppgtt,
struct drm_i915_gem_request *req)
{
- struct intel_ringbuffer *ring = req->ringbuf;
+ struct intel_ringbuffer *ring = req->ring;
int ret;

/* NB: TLBs must be flushed and invalidated before a switch */
diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
index 8adf2c134048..4cc64d9cca12 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/i915_gem_request.c
@@ -255,7 +255,7 @@ int i915_gem_request_alloc(struct intel_engine_cs *engine,
* to be redone if the request is not actually submitted straight
* away, e.g. because a GPU scheduler has deferred it.
*/
- intel_ring_reserved_space_reserve(req->ringbuf,
+ intel_ring_reserved_space_reserve(req->ring,
MIN_SPACE_FOR_ADD_REQUEST);
ret = intel_ring_begin(req, 0);
if (ret) {
@@ -328,7 +328,7 @@ static void __i915_gem_request_release(struct drm_i915_gem_request *request)

void i915_gem_request_cancel(struct drm_i915_gem_request *req)
{
- intel_ring_reserved_space_cancel(req->ringbuf);
+ intel_ring_reserved_space_cancel(req->ring);
if (i915.enable_execlists) {
if (req->ctx != req->engine->default_context)
intel_lr_context_unpin(req);
@@ -349,7 +349,7 @@ static void i915_gem_request_retire(struct drm_i915_gem_request *request)
* Note this requires that we are always called in request
* completion order.
*/
- request->ringbuf->last_retired_head = request->postfix;
+ request->ring->last_retired_head = request->postfix;
__i915_gem_request_release(request);
}

@@ -401,23 +401,23 @@ void __i915_add_request(struct drm_i915_gem_request *request,
struct drm_i915_gem_object *obj,
bool flush_caches)
{
- struct intel_ringbuffer *ringbuf;
+ struct intel_ringbuffer *ring;
u32 request_start;
int ret;

if (WARN_ON(request == NULL))
return;

- ringbuf = request->ringbuf;
+ ring = request->ring;

/*
* To ensure that this call will not fail, space for its emissions
* should already have been reserved in the ring buffer. Let the ring
* know that it is time to use that space up.
*/
- intel_ring_reserved_space_use(ringbuf);
+ intel_ring_reserved_space_use(ring);

- request_start = intel_ring_get_tail(ringbuf);
+ request_start = intel_ring_get_tail(ring);
/*
* Emit any outstanding flushes - execbuf can fail to emit the flush
* after having emitted the batchbuffer command. Hence we need to fix
@@ -439,14 +439,14 @@ void __i915_add_request(struct drm_i915_gem_request *request,
* GPU processing the request, we never over-estimate the
* position of the head.
*/
- request->postfix = intel_ring_get_tail(ringbuf);
+ request->postfix = intel_ring_get_tail(ring);

if (i915.enable_execlists)
ret = request->engine->emit_request(request);
else {
ret = request->engine->add_request(request);

- request->tail = intel_ring_get_tail(ringbuf);
+ request->tail = intel_ring_get_tail(ring);
}
/* Not allowed to fail! */
WARN(ret, "emit|add_request failed: %d!\n", ret);
@@ -471,7 +471,7 @@ void __i915_add_request(struct drm_i915_gem_request *request,
i915_gem_mark_busy(request->i915);

/* Sanity check that the reserved size was large enough. */
- intel_ring_reserved_space_end(ringbuf);
+ intel_ring_reserved_space_end(ring);
}


diff --git a/drivers/gpu/drm/i915/i915_gem_request.h b/drivers/gpu/drm/i915/i915_gem_request.h
index 802862e5007d..bd17e3a9a71d 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.h
+++ b/drivers/gpu/drm/i915/i915_gem_request.h
@@ -79,7 +79,7 @@ struct drm_i915_gem_request {
* context.
*/
struct intel_context *ctx;
- struct intel_ringbuffer *ringbuf;
+ struct intel_ringbuffer *ring;

/** Batch buffer related to this request if any (used for
error state dump only) */
diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index 5bf208d8009e..b47ca1b7041f 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -993,21 +993,21 @@ static void i915_gem_record_rings(struct drm_device *dev,
int i, count;

for (i = 0; i < I915_NUM_RINGS; i++) {
- struct intel_engine_cs *ring = &dev_priv->ring[i];
+ struct intel_engine_cs *engine = &dev_priv->ring[i];

error->ring[i].pid = -1;

- if (ring->dev == NULL)
+ if (engine->dev == NULL)
continue;

error->ring[i].valid = true;

- i915_record_ring_state(dev, error, ring, &error->ring[i]);
+ i915_record_ring_state(dev, error, engine, &error->ring[i]);

- request = i915_gem_find_active_request(ring);
+ request = i915_gem_find_active_request(engine);
if (request) {
struct i915_address_space *vm;
- struct intel_ringbuffer *rb;
+ struct intel_ringbuffer *ring;

vm = request->ctx && request->ctx->ppgtt ?
&request->ctx->ppgtt->base :
@@ -1022,10 +1022,10 @@ static void i915_gem_record_rings(struct drm_device *dev,
request->batch_obj,
vm);

- if (HAS_BROKEN_CS_TLB(dev_priv->dev))
+ if (HAS_BROKEN_CS_TLB(dev_priv))
error->ring[i].wa_batchbuffer =
i915_error_ggtt_object_create(dev_priv,
- ring->scratch.obj);
+ engine->scratch.obj);

if (request->pid) {
struct task_struct *task;
@@ -1041,21 +1041,22 @@ static void i915_gem_record_rings(struct drm_device *dev,

error->simulated |= request->ctx->flags & CONTEXT_NO_ERROR_CAPTURE;

- rb = request->ringbuf;
- error->ring[i].cpu_ring_head = rb->head;
- error->ring[i].cpu_ring_tail = rb->tail;
+ ring = request->ring;
+ error->ring[i].cpu_ring_head = ring->head;
+ error->ring[i].cpu_ring_tail = ring->tail;
error->ring[i].ringbuffer =
i915_error_ggtt_object_create(dev_priv,
- rb->obj);
+ ring->obj);
}

error->ring[i].hws_page =
- i915_error_ggtt_object_create(dev_priv, ring->status_page.obj);
+ i915_error_ggtt_object_create(dev_priv,
+ engine->status_page.obj);

- i915_gem_record_active_context(ring, error, &error->ring[i]);
+ i915_gem_record_active_context(engine, error, &error->ring[i]);

count = 0;
- list_for_each_entry(request, &ring->request_list, list)
+ list_for_each_entry(request, &engine->request_list, list)
count++;

error->ring[i].num_requests = count;
@@ -1068,7 +1069,7 @@ static void i915_gem_record_rings(struct drm_device *dev,
}

count = 0;
- list_for_each_entry(request, &ring->request_list, list) {
+ list_for_each_entry(request, &engine->request_list, list) {
struct drm_i915_error_request *erq;

if (count >= error->ring[i].num_requests) {
diff --git a/drivers/gpu/drm/i915/i915_guc_submission.c b/drivers/gpu/drm/i915/i915_guc_submission.c
index eaf680ce5c9c..e82cc9182dfa 100644
--- a/drivers/gpu/drm/i915/i915_guc_submission.c
+++ b/drivers/gpu/drm/i915/i915_guc_submission.c
@@ -551,7 +551,7 @@ static int guc_add_workqueue_item(struct i915_guc_client *gc,
wqi->context_desc = (u32)intel_lr_context_descriptor(rq->ctx, rq->engine);

/* The GuC firmware wants the tail index in QWords, not bytes */
- tail = rq->ringbuf->tail >> 3;
+ tail = rq->ring->tail >> 3;
wqi->ring_tail = tail << WQ_RING_TAIL_SHIFT;
wqi->fence_id = 0; /*XXX: what fence to be here */

@@ -567,7 +567,7 @@ static void lr_context_update(struct drm_i915_gem_request *rq)
{
enum intel_ring_id ring_id = rq->engine->id;
struct drm_i915_gem_object *ctx_obj = rq->ctx->engine[ring_id].state;
- struct drm_i915_gem_object *rb_obj = rq->ringbuf->obj;
+ struct drm_i915_gem_object *rb_obj = rq->ring->obj;
struct page *page;
uint32_t *reg_state;

diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 323b0d905c89..0d42356f15b4 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -11052,7 +11052,7 @@ static int intel_gen2_queue_flip(struct drm_device *dev,
struct drm_i915_gem_request *req,
uint32_t flags)
{
- struct intel_ringbuffer *ring = req->ringbuf;
+ struct intel_ringbuffer *ring = req->ring;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
u32 flip_mask;
int ret;
@@ -11087,7 +11087,7 @@ static int intel_gen3_queue_flip(struct drm_device *dev,
struct drm_i915_gem_request *req,
uint32_t flags)
{
- struct intel_ringbuffer *ring = req->ringbuf;
+ struct intel_ringbuffer *ring = req->ring;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
u32 flip_mask;
int ret;
@@ -11119,7 +11119,7 @@ static int intel_gen4_queue_flip(struct drm_device *dev,
struct drm_i915_gem_request *req,
uint32_t flags)
{
- struct intel_ringbuffer *ring = req->ringbuf;
+ struct intel_ringbuffer *ring = req->ring;
struct drm_i915_private *dev_priv = req->i915;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
uint32_t pf, pipesrc;
@@ -11158,7 +11158,7 @@ static int intel_gen6_queue_flip(struct drm_device *dev,
struct drm_i915_gem_request *req,
uint32_t flags)
{
- struct intel_ringbuffer *ring = req->ringbuf;
+ struct intel_ringbuffer *ring = req->ring;
struct drm_i915_private *dev_priv = req->i915;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
uint32_t pf, pipesrc;
@@ -11194,7 +11194,7 @@ static int intel_gen7_queue_flip(struct drm_device *dev,
struct drm_i915_gem_request *req,
uint32_t flags)
{
- struct intel_ringbuffer *ring = req->ringbuf;
+ struct intel_ringbuffer *ring = req->ring;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
uint32_t plane_bit = 0;
int len, ret;
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 1b70a76df31d..87d325b6e7dc 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -360,7 +360,7 @@ static int execlists_update_context(struct drm_i915_gem_request *rq)
{
struct i915_hw_ppgtt *ppgtt = rq->ctx->ppgtt;
struct drm_i915_gem_object *ctx_obj = rq->ctx->engine[rq->engine->id].state;
- struct drm_i915_gem_object *rb_obj = rq->ringbuf->obj;
+ struct drm_i915_gem_object *rb_obj = rq->ring->obj;
struct page *page;
uint32_t *reg_state;

@@ -671,7 +671,7 @@ int intel_logical_ring_alloc_request_extras(struct drm_i915_gem_request *request
{
int ret;

- request->ringbuf = request->ctx->engine[request->engine->id].ringbuf;
+ request->ring = request->ctx->engine[request->engine->id].ringbuf;

if (request->ctx != request->engine->default_context) {
ret = intel_lr_context_pin(request);
@@ -709,8 +709,8 @@ intel_logical_ring_advance_and_submit(struct drm_i915_gem_request *request)
{
struct drm_i915_private *dev_priv = request->i915;

- intel_ring_advance(request->ringbuf);
- request->tail = request->ringbuf->tail;
+ intel_ring_advance(request->ring);
+ request->tail = request->ring->tail;

if (dev_priv->guc.execbuf_client)
i915_guc_submit(dev_priv->guc.execbuf_client, request);
@@ -740,9 +740,9 @@ int intel_execlists_submission(struct i915_execbuffer_params *params,
struct list_head *vmas)
{
struct drm_device *dev = params->dev;
- struct intel_engine_cs *ring = params->ring;
+ struct intel_engine_cs *engine = params->ring;
struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_ringbuffer *ringbuf = params->ctx->engine[ring->id].ringbuf;
+ struct intel_ringbuffer *ring = params->request->ring;
u64 exec_start;
int instp_mode;
u32 instp_mask;
@@ -754,7 +754,7 @@ int intel_execlists_submission(struct i915_execbuffer_params *params,
case I915_EXEC_CONSTANTS_REL_GENERAL:
case I915_EXEC_CONSTANTS_ABSOLUTE:
case I915_EXEC_CONSTANTS_REL_SURFACE:
- if (instp_mode != 0 && ring != &dev_priv->ring[RCS]) {
+ if (instp_mode != 0 && engine->id != RCS) {
DRM_DEBUG("non-0 rel constants mode on non-RCS\n");
return -EINVAL;
}
@@ -783,17 +783,17 @@ int intel_execlists_submission(struct i915_execbuffer_params *params,
if (ret)
return ret;

- if (ring == &dev_priv->ring[RCS] &&
+ if (engine->id == RCS &&
instp_mode != dev_priv->relative_constants_mode) {
ret = intel_ring_begin(params->request, 4);
if (ret)
return ret;

- intel_ring_emit(ringbuf, MI_NOOP);
- intel_ring_emit(ringbuf, MI_LOAD_REGISTER_IMM(1));
- intel_ring_emit_reg(ringbuf, INSTPM);
- intel_ring_emit(ringbuf, instp_mask << 16 | instp_mode);
- intel_ring_advance(ringbuf);
+ intel_ring_emit(ring, MI_NOOP);
+ intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(1));
+ intel_ring_emit_reg(ring, INSTPM);
+ intel_ring_emit(ring, instp_mask << 16 | instp_mode);
+ intel_ring_advance(ring);

dev_priv->relative_constants_mode = instp_mode;
}
@@ -801,7 +801,7 @@ int intel_execlists_submission(struct i915_execbuffer_params *params,
exec_start = params->batch_obj_vm_offset +
args->batch_start_offset;

- ret = ring->emit_bb_start(params->request, exec_start, params->dispatch_flags);
+ ret = engine->emit_bb_start(params->request, exec_start, params->dispatch_flags);
if (ret)
return ret;

@@ -880,13 +880,12 @@ static int intel_lr_context_do_pin(struct intel_engine_cs *ring,
struct drm_i915_gem_object *ctx_obj,
struct intel_ringbuffer *ringbuf)
{
- struct drm_device *dev = ring->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
+ struct drm_i915_private *dev_priv = ring->i915;
int ret = 0;

WARN_ON(!mutex_is_locked(&ring->dev->struct_mutex));
ret = i915_gem_obj_ggtt_pin(ctx_obj, GEN8_LR_CONTEXT_ALIGN,
- PIN_OFFSET_BIAS | GUC_WOPCM_TOP);
+ PIN_OFFSET_BIAS | GUC_WOPCM_TOP);
if (ret)
return ret;

@@ -918,7 +917,7 @@ static int intel_lr_context_pin(struct drm_i915_gem_request *rq)

ret = intel_lr_context_do_pin(rq->engine,
rq->ctx->engine[engine].state,
- rq->ringbuf);
+ rq->ring);
if (ret) {
rq->ctx->engine[engine].pin_count = 0;
return ret;
@@ -932,12 +931,12 @@ void intel_lr_context_unpin(struct drm_i915_gem_request *rq)
{
int engine = rq->engine->id;
struct drm_i915_gem_object *ctx_obj = rq->ctx->engine[engine].state;
- struct intel_ringbuffer *ringbuf = rq->ringbuf;
+ struct intel_ringbuffer *ring = rq->ring;

if (ctx_obj) {
WARN_ON(!mutex_is_locked(&rq->i915->dev->struct_mutex));
if (--rq->ctx->engine[engine].pin_count == 0) {
- intel_unpin_ringbuffer_obj(ringbuf);
+ intel_unpin_ringbuffer_obj(ring);
i915_gem_object_ggtt_unpin(ctx_obj);
i915_gem_context_unreference(rq->ctx);
}
@@ -948,7 +947,7 @@ static int intel_logical_ring_workarounds_emit(struct drm_i915_gem_request *req)
{
int ret, i;
struct intel_engine_cs *engine = req->engine;
- struct intel_ringbuffer *ringbuf = req->ringbuf;
+ struct intel_ringbuffer *ring = req->ring;
struct drm_i915_private *dev_priv = req->i915;
struct i915_workarounds *w = &dev_priv->workarounds;

@@ -964,14 +963,14 @@ static int intel_logical_ring_workarounds_emit(struct drm_i915_gem_request *req)
if (ret)
return ret;

- intel_ring_emit(ringbuf, MI_LOAD_REGISTER_IMM(w->count));
+ intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(w->count));
for (i = 0; i < w->count; i++) {
- intel_ring_emit_reg(ringbuf, w->reg[i].addr);
- intel_ring_emit(ringbuf, w->reg[i].value);
+ intel_ring_emit_reg(ring, w->reg[i].addr);
+ intel_ring_emit(ring, w->reg[i].value);
}
- intel_ring_emit(ringbuf, MI_NOOP);
+ intel_ring_emit(ring, MI_NOOP);

- intel_ring_advance(ringbuf);
+ intel_ring_advance(ring);

engine->gpu_caches_dirty = true;
ret = logical_ring_flush_all_caches(req);
@@ -1418,7 +1417,7 @@ static int intel_logical_ring_emit_pdps(struct drm_i915_gem_request *req)
{
struct i915_hw_ppgtt *ppgtt = req->ctx->ppgtt;
struct intel_engine_cs *engine = req->engine;
- struct intel_ringbuffer *ringbuf = req->ringbuf;
+ struct intel_ringbuffer *ring = req->ring;
const int num_lri_cmds = GEN8_LEGACY_PDPES * 2;
int i, ret;

@@ -1426,18 +1425,18 @@ static int intel_logical_ring_emit_pdps(struct drm_i915_gem_request *req)
if (ret)
return ret;

- intel_ring_emit(ringbuf, MI_LOAD_REGISTER_IMM(num_lri_cmds));
+ intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(num_lri_cmds));
for (i = GEN8_LEGACY_PDPES - 1; i >= 0; i--) {
const dma_addr_t pd_daddr = i915_page_dir_dma_addr(ppgtt, i);

- intel_ring_emit_reg(ringbuf, GEN8_RING_PDP_UDW(engine, i));
- intel_ring_emit(ringbuf, upper_32_bits(pd_daddr));
- intel_ring_emit_reg(ringbuf, GEN8_RING_PDP_LDW(engine, i));
- intel_ring_emit(ringbuf, lower_32_bits(pd_daddr));
+ intel_ring_emit_reg(ring, GEN8_RING_PDP_UDW(engine, i));
+ intel_ring_emit(ring, upper_32_bits(pd_daddr));
+ intel_ring_emit_reg(ring, GEN8_RING_PDP_LDW(engine, i));
+ intel_ring_emit(ring, lower_32_bits(pd_daddr));
}

- intel_ring_emit(ringbuf, MI_NOOP);
- intel_ring_advance(ringbuf);
+ intel_ring_emit(ring, MI_NOOP);
+ intel_ring_advance(ring);

return 0;
}
@@ -1445,7 +1444,7 @@ static int intel_logical_ring_emit_pdps(struct drm_i915_gem_request *req)
static int gen8_emit_bb_start(struct drm_i915_gem_request *req,
u64 offset, unsigned dispatch_flags)
{
- struct intel_ringbuffer *ringbuf = req->ringbuf;
+ struct intel_ringbuffer *ring = req->ring;
bool ppgtt = !(dispatch_flags & I915_DISPATCH_SECURE);
int ret;

@@ -1472,14 +1471,14 @@ static int gen8_emit_bb_start(struct drm_i915_gem_request *req,
return ret;

/* FIXME(BDW): Address space and security selectors. */
- intel_ring_emit(ringbuf, MI_BATCH_BUFFER_START_GEN8 |
+ intel_ring_emit(ring, MI_BATCH_BUFFER_START_GEN8 |
(ppgtt<<8) |
(dispatch_flags & I915_DISPATCH_RS ?
MI_BATCH_RESOURCE_STREAMER : 0));
- intel_ring_emit(ringbuf, lower_32_bits(offset));
- intel_ring_emit(ringbuf, upper_32_bits(offset));
- intel_ring_emit(ringbuf, MI_NOOP);
- intel_ring_advance(ringbuf);
+ intel_ring_emit(ring, lower_32_bits(offset));
+ intel_ring_emit(ring, upper_32_bits(offset));
+ intel_ring_emit(ring, MI_NOOP);
+ intel_ring_advance(ring);

return 0;
}
@@ -1504,10 +1503,7 @@ static int gen8_emit_flush(struct drm_i915_gem_request *request,
u32 invalidate_domains,
u32 unused)
{
- struct intel_ringbuffer *ringbuf = request->ringbuf;
- struct intel_engine_cs *ring = ringbuf->ring;
- struct drm_device *dev = ring->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
+ struct intel_ringbuffer *ring = request->ring;
uint32_t cmd;
int ret;

@@ -1526,17 +1522,17 @@ static int gen8_emit_flush(struct drm_i915_gem_request *request,

if (invalidate_domains & I915_GEM_GPU_DOMAINS) {
cmd |= MI_INVALIDATE_TLB;
- if (ring == &dev_priv->ring[VCS])
+ if (request->engine->id == VCS)
cmd |= MI_INVALIDATE_BSD;
}

- intel_ring_emit(ringbuf, cmd);
- intel_ring_emit(ringbuf,
+ intel_ring_emit(ring, cmd);
+ intel_ring_emit(ring,
I915_GEM_HWS_SCRATCH_ADDR |
MI_FLUSH_DW_USE_GTT);
- intel_ring_emit(ringbuf, 0); /* upper addr */
- intel_ring_emit(ringbuf, 0); /* value */
- intel_ring_advance(ringbuf);
+ intel_ring_emit(ring, 0); /* upper addr */
+ intel_ring_emit(ring, 0); /* value */
+ intel_ring_advance(ring);

return 0;
}
@@ -1545,9 +1541,8 @@ static int gen8_emit_flush_render(struct drm_i915_gem_request *request,
u32 invalidate_domains,
u32 flush_domains)
{
- struct intel_ringbuffer *ringbuf = request->ringbuf;
- struct intel_engine_cs *ring = ringbuf->ring;
- u32 scratch_addr = ring->scratch.gtt_offset + 2 * CACHELINE_BYTES;
+ struct intel_ringbuffer *ring = request->ring;
+ u32 scratch_addr = request->engine->scratch.gtt_offset + 2 * CACHELINE_BYTES;
bool vf_flush_wa = false;
u32 flags = 0;
int ret;
@@ -1574,7 +1569,7 @@ static int gen8_emit_flush_render(struct drm_i915_gem_request *request,
* On GEN9: before VF_CACHE_INVALIDATE we need to emit a NULL
* pipe control.
*/
- if (IS_GEN9(ring->dev))
+ if (IS_GEN9(request->i915))
vf_flush_wa = true;
}

@@ -1583,21 +1578,21 @@ static int gen8_emit_flush_render(struct drm_i915_gem_request *request,
return ret;

if (vf_flush_wa) {
- intel_ring_emit(ringbuf, GFX_OP_PIPE_CONTROL(6));
- intel_ring_emit(ringbuf, 0);
- intel_ring_emit(ringbuf, 0);
- intel_ring_emit(ringbuf, 0);
- intel_ring_emit(ringbuf, 0);
- intel_ring_emit(ringbuf, 0);
+ intel_ring_emit(ring, GFX_OP_PIPE_CONTROL(6));
+ intel_ring_emit(ring, 0);
+ intel_ring_emit(ring, 0);
+ intel_ring_emit(ring, 0);
+ intel_ring_emit(ring, 0);
+ intel_ring_emit(ring, 0);
}

- intel_ring_emit(ringbuf, GFX_OP_PIPE_CONTROL(6));
- intel_ring_emit(ringbuf, flags);
- intel_ring_emit(ringbuf, scratch_addr);
- intel_ring_emit(ringbuf, 0);
- intel_ring_emit(ringbuf, 0);
- intel_ring_emit(ringbuf, 0);
- intel_ring_advance(ringbuf);
+ intel_ring_emit(ring, GFX_OP_PIPE_CONTROL(6));
+ intel_ring_emit(ring, flags);
+ intel_ring_emit(ring, scratch_addr);
+ intel_ring_emit(ring, 0);
+ intel_ring_emit(ring, 0);
+ intel_ring_emit(ring, 0);
+ intel_ring_advance(ring);

return 0;
}
@@ -1625,8 +1620,7 @@ gen6_seqno_barrier(struct intel_engine_cs *ring)

static int gen8_emit_request(struct drm_i915_gem_request *request)
{
- struct intel_ringbuffer *ringbuf = request->ringbuf;
- struct intel_engine_cs *ring = ringbuf->ring;
+ struct intel_ringbuffer *ring = request->ring;
u32 cmd;
int ret;

@@ -1642,23 +1636,23 @@ static int gen8_emit_request(struct drm_i915_gem_request *request)
cmd = MI_STORE_DWORD_IMM_GEN4;
cmd |= MI_GLOBAL_GTT;

- intel_ring_emit(ringbuf, cmd);
- intel_ring_emit(ringbuf,
- (ring->status_page.gfx_addr +
+ intel_ring_emit(ring, cmd);
+ intel_ring_emit(ring,
+ (request->engine->status_page.gfx_addr +
(I915_GEM_HWS_INDEX << MI_STORE_DWORD_INDEX_SHIFT)));
- intel_ring_emit(ringbuf, 0);
- intel_ring_emit(ringbuf, request->fence.seqno);
- intel_ring_emit(ringbuf, MI_USER_INTERRUPT);
- intel_ring_emit(ringbuf, MI_NOOP);
+ intel_ring_emit(ring, 0);
+ intel_ring_emit(ring, request->fence.seqno);
+ intel_ring_emit(ring, MI_USER_INTERRUPT);
+ intel_ring_emit(ring, MI_NOOP);
intel_logical_ring_advance_and_submit(request);

/*
* Here we add two extra NOOPs as padding to avoid
* lite restore of a context with HEAD==TAIL.
*/
- intel_ring_emit(ringbuf, MI_NOOP);
- intel_ring_emit(ringbuf, MI_NOOP);
- intel_ring_advance(ringbuf);
+ intel_ring_emit(ring, MI_NOOP);
+ intel_ring_emit(ring, MI_NOOP);
+ intel_ring_advance(ring);

return 0;
}
diff --git a/drivers/gpu/drm/i915/intel_mocs.c b/drivers/gpu/drm/i915/intel_mocs.c
index 40041bebc3dc..039c7405f640 100644
--- a/drivers/gpu/drm/i915/intel_mocs.c
+++ b/drivers/gpu/drm/i915/intel_mocs.c
@@ -191,9 +191,9 @@ static i915_reg_t mocs_register(enum intel_ring_id ring, int index)
*/
static int emit_mocs_control_table(struct drm_i915_gem_request *req,
const struct drm_i915_mocs_table *table,
- enum intel_ring_id ring)
+ enum intel_ring_id id)
{
- struct intel_ringbuffer *ringbuf = req->ringbuf;
+ struct intel_ringbuffer *ring = req->ring;
unsigned int index;
int ret;

@@ -204,11 +204,11 @@ static int emit_mocs_control_table(struct drm_i915_gem_request *req,
if (ret)
return ret;

- intel_ring_emit(ringbuf, MI_LOAD_REGISTER_IMM(GEN9_NUM_MOCS_ENTRIES));
+ intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(GEN9_NUM_MOCS_ENTRIES));

for (index = 0; index < table->size; index++) {
- intel_ring_emit_reg(ringbuf, mocs_register(ring, index));
- intel_ring_emit(ringbuf, table->table[index].control_value);
+ intel_ring_emit_reg(ring, mocs_register(id, index));
+ intel_ring_emit(ring, table->table[index].control_value);
}

/*
@@ -220,12 +220,12 @@ static int emit_mocs_control_table(struct drm_i915_gem_request *req,
* that value to all the used entries.
*/
for (; index < GEN9_NUM_MOCS_ENTRIES; index++) {
- intel_ring_emit_reg(ringbuf, mocs_register(ring, index));
- intel_ring_emit(ringbuf, table->table[0].control_value);
+ intel_ring_emit_reg(ring, mocs_register(id, index));
+ intel_ring_emit(ring, table->table[0].control_value);
}

- intel_ring_emit(ringbuf, MI_NOOP);
- intel_ring_advance(ringbuf);
+ intel_ring_emit(ring, MI_NOOP);
+ intel_ring_advance(ring);

return 0;
}
@@ -244,7 +244,7 @@ static int emit_mocs_control_table(struct drm_i915_gem_request *req,
static int emit_mocs_l3cc_table(struct drm_i915_gem_request *req,
const struct drm_i915_mocs_table *table)
{
- struct intel_ringbuffer *ringbuf = req->ringbuf;
+ struct intel_ringbuffer *ring = req->ring;
unsigned int count;
unsigned int i;
u32 value;
@@ -259,15 +259,15 @@ static int emit_mocs_l3cc_table(struct drm_i915_gem_request *req,
if (ret)
return ret;

- intel_ring_emit(ringbuf,
+ intel_ring_emit(ring,
MI_LOAD_REGISTER_IMM(GEN9_NUM_MOCS_ENTRIES / 2));

for (i = 0, count = 0; i < table->size / 2; i++, count += 2) {
value = (table->table[count].l3cc_value & 0xffff) |
((table->table[count + 1].l3cc_value & 0xffff) << 16);

- intel_ring_emit_reg(ringbuf, GEN9_LNCFCMOCS(i));
- intel_ring_emit(ringbuf, value);
+ intel_ring_emit_reg(ring, GEN9_LNCFCMOCS(i));
+ intel_ring_emit(ring, value);
}

if (table->size & 0x01) {
@@ -283,14 +283,14 @@ static int emit_mocs_l3cc_table(struct drm_i915_gem_request *req,
* they are reserved by the hardware.
*/
for (; i < GEN9_NUM_MOCS_ENTRIES / 2; i++) {
- intel_ring_emit_reg(ringbuf, GEN9_LNCFCMOCS(i));
- intel_ring_emit(ringbuf, value);
+ intel_ring_emit_reg(ring, GEN9_LNCFCMOCS(i));
+ intel_ring_emit(ring, value);

value = filler;
}

- intel_ring_emit(ringbuf, MI_NOOP);
- intel_ring_advance(ringbuf);
+ intel_ring_emit(ring, MI_NOOP);
+ intel_ring_advance(ring);

return 0;
}
diff --git a/drivers/gpu/drm/i915/intel_overlay.c b/drivers/gpu/drm/i915/intel_overlay.c
index 6dca0e470e61..cb73d16848b0 100644
--- a/drivers/gpu/drm/i915/intel_overlay.c
+++ b/drivers/gpu/drm/i915/intel_overlay.c
@@ -252,11 +252,11 @@ static int intel_overlay_on(struct intel_overlay *overlay)

overlay->active = true;

- intel_ring_emit(req->ringbuf, MI_OVERLAY_FLIP | MI_OVERLAY_ON);
- intel_ring_emit(req->ringbuf, overlay->flip_addr | OFC_UPDATE);
- intel_ring_emit(req->ringbuf, MI_WAIT_FOR_EVENT | MI_WAIT_FOR_OVERLAY_FLIP);
- intel_ring_emit(req->ringbuf, MI_NOOP);
- intel_ring_advance(req->ringbuf);
+ intel_ring_emit(req->ring, MI_OVERLAY_FLIP | MI_OVERLAY_ON);
+ intel_ring_emit(req->ring, overlay->flip_addr | OFC_UPDATE);
+ intel_ring_emit(req->ring, MI_WAIT_FOR_EVENT | MI_WAIT_FOR_OVERLAY_FLIP);
+ intel_ring_emit(req->ring, MI_NOOP);
+ intel_ring_advance(req->ring);

return intel_overlay_do_wait_request(overlay, req, NULL);
}
@@ -293,9 +293,9 @@ static int intel_overlay_continue(struct intel_overlay *overlay,
return ret;
}

- intel_ring_emit(req->ringbuf, MI_OVERLAY_FLIP | MI_OVERLAY_CONTINUE);
- intel_ring_emit(req->ringbuf, flip_addr);
- intel_ring_advance(req->ringbuf);
+ intel_ring_emit(req->ring, MI_OVERLAY_FLIP | MI_OVERLAY_CONTINUE);
+ intel_ring_emit(req->ring, flip_addr);
+ intel_ring_advance(req->ring);

WARN_ON(overlay->last_flip_req);
i915_gem_request_assign(&overlay->last_flip_req, req);
@@ -360,22 +360,22 @@ static int intel_overlay_off(struct intel_overlay *overlay)
}

/* wait for overlay to go idle */
- intel_ring_emit(req->ringbuf, MI_OVERLAY_FLIP | MI_OVERLAY_CONTINUE);
- intel_ring_emit(req->ringbuf, flip_addr);
- intel_ring_emit(req->ringbuf, MI_WAIT_FOR_EVENT | MI_WAIT_FOR_OVERLAY_FLIP);
+ intel_ring_emit(req->ring, MI_OVERLAY_FLIP | MI_OVERLAY_CONTINUE);
+ intel_ring_emit(req->ring, flip_addr);
+ intel_ring_emit(req->ring, MI_WAIT_FOR_EVENT | MI_WAIT_FOR_OVERLAY_FLIP);
/* turn overlay off */
if (IS_I830(dev)) {
/* Workaround: Don't disable the overlay fully, since otherwise
* it dies on the next OVERLAY_ON cmd. */
- intel_ring_emit(req->ringbuf, MI_NOOP);
- intel_ring_emit(req->ringbuf, MI_NOOP);
- intel_ring_emit(req->ringbuf, MI_NOOP);
+ intel_ring_emit(req->ring, MI_NOOP);
+ intel_ring_emit(req->ring, MI_NOOP);
+ intel_ring_emit(req->ring, MI_NOOP);
} else {
- intel_ring_emit(req->ringbuf, MI_OVERLAY_FLIP | MI_OVERLAY_OFF);
- intel_ring_emit(req->ringbuf, flip_addr);
- intel_ring_emit(req->ringbuf, MI_WAIT_FOR_EVENT | MI_WAIT_FOR_OVERLAY_FLIP);
+ intel_ring_emit(req->ring, MI_OVERLAY_FLIP | MI_OVERLAY_OFF);
+ intel_ring_emit(req->ring, flip_addr);
+ intel_ring_emit(req->ring, MI_WAIT_FOR_EVENT | MI_WAIT_FOR_OVERLAY_FLIP);
}
- intel_ring_advance(req->ringbuf);
+ intel_ring_advance(req->ring);

return intel_overlay_do_wait_request(overlay, req, intel_overlay_off_tail);
}
@@ -433,9 +433,9 @@ static int intel_overlay_release_old_vid(struct intel_overlay *overlay)
return ret;
}

- intel_ring_emit(req->ringbuf, MI_WAIT_FOR_EVENT | MI_WAIT_FOR_OVERLAY_FLIP);
- intel_ring_emit(req->ringbuf, MI_NOOP);
- intel_ring_advance(req->ringbuf);
+ intel_ring_emit(req->ring, MI_WAIT_FOR_EVENT | MI_WAIT_FOR_OVERLAY_FLIP);
+ intel_ring_emit(req->ring, MI_NOOP);
+ intel_ring_advance(req->ring);

ret = intel_overlay_do_wait_request(overlay, req,
intel_overlay_release_old_vid_tail);
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index 072fd0fc7748..ae00e79c9c99 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -71,7 +71,7 @@ gen2_render_ring_flush(struct drm_i915_gem_request *req,
u32 invalidate_domains,
u32 flush_domains)
{
- struct intel_ringbuffer *ring = req->ringbuf;
+ struct intel_ringbuffer *ring = req->ring;
u32 cmd;
int ret;

@@ -98,7 +98,7 @@ gen4_render_ring_flush(struct drm_i915_gem_request *req,
u32 invalidate_domains,
u32 flush_domains)
{
- struct intel_ringbuffer *ring = req->ringbuf;
+ struct intel_ringbuffer *ring = req->ring;
u32 cmd;
int ret;

@@ -191,7 +191,7 @@ gen4_render_ring_flush(struct drm_i915_gem_request *req,
static int
intel_emit_post_sync_nonzero_flush(struct drm_i915_gem_request *req)
{
- struct intel_ringbuffer *ring = req->ringbuf;
+ struct intel_ringbuffer *ring = req->ring;
u32 scratch_addr = req->engine->scratch.gtt_offset + 2 * CACHELINE_BYTES;
int ret;

@@ -227,7 +227,7 @@ static int
gen6_render_ring_flush(struct drm_i915_gem_request *req,
u32 invalidate_domains, u32 flush_domains)
{
- struct intel_ringbuffer *ring = req->ringbuf;
+ struct intel_ringbuffer *ring = req->ring;
u32 flags = 0;
u32 scratch_addr = req->engine->scratch.gtt_offset + 2 * CACHELINE_BYTES;
int ret;
@@ -279,7 +279,7 @@ gen6_render_ring_flush(struct drm_i915_gem_request *req,
static int
gen7_render_ring_cs_stall_wa(struct drm_i915_gem_request *req)
{
- struct intel_ringbuffer *ring = req->ringbuf;
+ struct intel_ringbuffer *ring = req->ring;
int ret;

ret = intel_ring_begin(req, 4);
@@ -300,7 +300,7 @@ static int
gen7_render_ring_flush(struct drm_i915_gem_request *req,
u32 invalidate_domains, u32 flush_domains)
{
- struct intel_ringbuffer *ring = req->ringbuf;
+ struct intel_ringbuffer *ring = req->ring;
u32 flags = 0;
u32 scratch_addr = req->engine->scratch.gtt_offset + 2 * CACHELINE_BYTES;
int ret;
@@ -363,7 +363,7 @@ static int
gen8_emit_pipe_control(struct drm_i915_gem_request *req,
u32 flags, u32 scratch_addr)
{
- struct intel_ringbuffer *ring = req->ringbuf;
+ struct intel_ringbuffer *ring = req->ring;
int ret;

ret = intel_ring_begin(req, 6);
@@ -688,7 +688,7 @@ err:

static int intel_ring_workarounds_emit(struct drm_i915_gem_request *req)
{
- struct intel_ringbuffer *ring = req->ringbuf;
+ struct intel_ringbuffer *ring = req->ring;
struct drm_i915_private *dev_priv = req->i915;
struct i915_workarounds *w = &dev_priv->workarounds;
int ret, i;
@@ -1191,7 +1191,7 @@ static int gen8_rcs_signal(struct drm_i915_gem_request *signaller_req,
unsigned int num_dwords)
{
#define MBOX_UPDATE_DWORDS 8
- struct intel_ringbuffer *signaller = signaller_req->ringbuf;
+ struct intel_ringbuffer *signaller = signaller_req->ring;
struct drm_i915_private *dev_priv = signaller_req->i915;
struct intel_engine_cs *waiter;
int i, ret, num_rings;
@@ -1229,7 +1229,7 @@ static int gen8_xcs_signal(struct drm_i915_gem_request *signaller_req,
unsigned int num_dwords)
{
#define MBOX_UPDATE_DWORDS 6
- struct intel_ringbuffer *signaller = signaller_req->ringbuf;
+ struct intel_ringbuffer *signaller = signaller_req->ring;
struct drm_i915_private *dev_priv = signaller_req->i915;
struct intel_engine_cs *waiter;
int i, ret, num_rings;
@@ -1264,7 +1264,7 @@ static int gen8_xcs_signal(struct drm_i915_gem_request *signaller_req,
static int gen6_signal(struct drm_i915_gem_request *signaller_req,
unsigned int num_dwords)
{
- struct intel_ringbuffer *signaller = signaller_req->ringbuf;
+ struct intel_ringbuffer *signaller = signaller_req->ring;
struct drm_i915_private *dev_priv = signaller_req->i915;
struct intel_engine_cs *useless;
int i, ret, num_rings;
@@ -1306,7 +1306,7 @@ static int gen6_signal(struct drm_i915_gem_request *signaller_req,
static int
gen6_add_request(struct drm_i915_gem_request *req)
{
- struct intel_ringbuffer *ring = req->ringbuf;
+ struct intel_ringbuffer *ring = req->ring;
int ret;

if (req->engine->semaphore.signal)
@@ -1345,7 +1345,7 @@ gen8_ring_sync(struct drm_i915_gem_request *waiter_req,
struct intel_engine_cs *signaller,
u32 seqno)
{
- struct intel_ringbuffer *waiter = waiter_req->ringbuf;
+ struct intel_ringbuffer *waiter = waiter_req->ring;
struct drm_i915_private *dev_priv = waiter_req->i915;
int ret;

@@ -1373,7 +1373,7 @@ gen6_ring_sync(struct drm_i915_gem_request *waiter_req,
struct intel_engine_cs *signaller,
u32 seqno)
{
- struct intel_ringbuffer *waiter = waiter_req->ringbuf;
+ struct intel_ringbuffer *waiter = waiter_req->ring;
u32 dw1 = MI_SEMAPHORE_MBOX |
MI_SEMAPHORE_COMPARE |
MI_SEMAPHORE_REGISTER;
@@ -1421,7 +1421,7 @@ do { \
static int
pc_render_add_request(struct drm_i915_gem_request *req)
{
- struct intel_ringbuffer *ring = req->ringbuf;
+ struct intel_ringbuffer *ring = req->ring;
u32 addr = req->engine->status_page.gfx_addr +
(I915_GEM_HWS_INDEX << MI_STORE_DWORD_INDEX_SHIFT);
u32 scratch_addr = addr;
@@ -1548,7 +1548,7 @@ bsd_ring_flush(struct drm_i915_gem_request *req,
u32 invalidate_domains,
u32 flush_domains)
{
- struct intel_ringbuffer *ring = req->ringbuf;
+ struct intel_ringbuffer *ring = req->ring;
int ret;

ret = intel_ring_begin(req, 2);
@@ -1564,7 +1564,7 @@ bsd_ring_flush(struct drm_i915_gem_request *req,
static int
i9xx_add_request(struct drm_i915_gem_request *req)
{
- struct intel_ringbuffer *ring = req->ringbuf;
+ struct intel_ringbuffer *ring = req->ring;
int ret;

ret = intel_ring_begin(req, 4);
@@ -1658,7 +1658,7 @@ i965_dispatch_execbuffer(struct drm_i915_gem_request *req,
u64 offset, u32 length,
unsigned dispatch_flags)
{
- struct intel_ringbuffer *ring = req->ringbuf;
+ struct intel_ringbuffer *ring = req->ring;
int ret;

ret = intel_ring_begin(req, 2);
@@ -1685,7 +1685,7 @@ i830_dispatch_execbuffer(struct drm_i915_gem_request *req,
u64 offset, u32 len,
unsigned dispatch_flags)
{
- struct intel_ringbuffer *ring = req->ringbuf;
+ struct intel_ringbuffer *ring = req->ring;
u32 cs_offset = req->engine->scratch.gtt_offset;
int ret;

@@ -1748,7 +1748,7 @@ i915_dispatch_execbuffer(struct drm_i915_gem_request *req,
u64 offset, u32 len,
unsigned dispatch_flags)
{
- struct intel_ringbuffer *ring = req->ringbuf;
+ struct intel_ringbuffer *ring = req->ring;
int ret;

ret = intel_ring_begin(req, 2);
@@ -2082,7 +2082,7 @@ int intel_ring_idle(struct intel_engine_cs *ring)

int intel_ring_alloc_request_extras(struct drm_i915_gem_request *request)
{
- request->ringbuf = request->engine->buffer;
+ request->ring = request->engine->buffer;
return 0;
}

@@ -2135,17 +2135,17 @@ void intel_ring_reserved_space_end(struct intel_ringbuffer *ringbuf)

static int wait_for_space(struct drm_i915_gem_request *req, int bytes)
{
- struct intel_ringbuffer *ringbuf = req->ringbuf;
+ struct intel_ringbuffer *ring = req->ring;
struct intel_engine_cs *engine = req->engine;
struct drm_i915_gem_request *target;
unsigned space;
int ret;

- if (intel_ring_space(ringbuf) >= bytes)
+ if (intel_ring_space(ring) >= bytes)
return 0;

/* The whole point of reserving space is to not wait! */
- WARN_ON(ringbuf->reserved_in_use);
+ WARN_ON(ring->reserved_in_use);

list_for_each_entry(target, &engine->request_list, list) {
/*
@@ -2153,12 +2153,12 @@ static int wait_for_space(struct drm_i915_gem_request *req, int bytes)
* from multiple ringbuffers. Here, we must ignore any that
* aren't from the ringbuffer we're considering.
*/
- if (target->ringbuf != ringbuf)
+ if (target->ring != ring)
continue;

/* Would completion of this request free enough space? */
- space = __intel_ring_space(target->postfix, ringbuf->tail,
- ringbuf->size);
+ space = __intel_ring_space(target->postfix, ring->tail,
+ ring->size);
if (space >= bytes)
break;
}
@@ -2170,7 +2170,7 @@ static int wait_for_space(struct drm_i915_gem_request *req, int bytes)
if (ret)
return ret;

- ringbuf->space = space;
+ ring->space = space;
return 0;
}

@@ -2185,16 +2185,16 @@ static void ring_wrap(struct intel_ringbuffer *ringbuf)

static int ring_prepare(struct drm_i915_gem_request *req, int bytes)
{
- struct intel_ringbuffer *ringbuf = req->ringbuf;
- int remain_usable = ringbuf->effective_size - ringbuf->tail;
- int remain_actual = ringbuf->size - ringbuf->tail;
+ struct intel_ringbuffer *ring = req->ring;
+ int remain_usable = ring->effective_size - ring->tail;
+ int remain_actual = ring->size - ring->tail;
int ret, total_bytes, wait_bytes = 0;
bool need_wrap = false;

- if (ringbuf->reserved_in_use)
+ if (ring->reserved_in_use)
total_bytes = bytes;
else
- total_bytes = bytes + ringbuf->reserved_size;
+ total_bytes = bytes + ring->reserved_size;

if (unlikely(bytes > remain_usable)) {
/*
@@ -2210,9 +2210,9 @@ static int ring_prepare(struct drm_i915_gem_request *req, int bytes)
* falls off the end. So only need to wait for the
* reserved size after flushing out the remainder.
*/
- wait_bytes = remain_actual + ringbuf->reserved_size;
+ wait_bytes = remain_actual + ring->reserved_size;
need_wrap = true;
- } else if (total_bytes > ringbuf->space) {
+ } else if (total_bytes > ring->space) {
/* No wrapping required, just waiting. */
wait_bytes = total_bytes;
}
@@ -2224,7 +2224,7 @@ static int ring_prepare(struct drm_i915_gem_request *req, int bytes)
return ret;

if (need_wrap)
- ring_wrap(ringbuf);
+ ring_wrap(ring);
}

return 0;
@@ -2238,14 +2238,14 @@ int intel_ring_begin(struct drm_i915_gem_request *req, int num_dwords)
if (ret)
return ret;

- req->ringbuf->space -= num_dwords * sizeof(uint32_t);
+ req->ring->space -= num_dwords * sizeof(uint32_t);
return 0;
}

/* Align the ring tail to a cacheline boundary */
int intel_ring_cacheline_align(struct drm_i915_gem_request *req)
{
- struct intel_ringbuffer *ring = req->ringbuf;
+ struct intel_ringbuffer *ring = req->ring;
int num_dwords = (ring->tail & (CACHELINE_BYTES - 1)) / sizeof(uint32_t);
int ret;

@@ -2320,7 +2320,7 @@ static void gen6_bsd_ring_write_tail(struct intel_engine_cs *ring,
static int gen6_bsd_ring_flush(struct drm_i915_gem_request *req,
u32 invalidate, u32 flush)
{
- struct intel_ringbuffer *ring = req->ringbuf;
+ struct intel_ringbuffer *ring = req->ring;
uint32_t cmd;
int ret;

@@ -2366,7 +2366,7 @@ gen8_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
u64 offset, u32 len,
unsigned dispatch_flags)
{
- struct intel_ringbuffer *ring = req->ringbuf;
+ struct intel_ringbuffer *ring = req->ring;
bool ppgtt = USES_PPGTT(req->i915) &&
!(dispatch_flags & I915_DISPATCH_SECURE);
int ret;
@@ -2392,7 +2392,7 @@ hsw_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
u64 offset, u32 len,
unsigned dispatch_flags)
{
- struct intel_ringbuffer *ring = req->ringbuf;
+ struct intel_ringbuffer *ring = req->ring;
int ret;

ret = intel_ring_begin(req, 2);
@@ -2417,7 +2417,7 @@ gen6_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
u64 offset, u32 len,
unsigned dispatch_flags)
{
- struct intel_ringbuffer *ring = req->ringbuf;
+ struct intel_ringbuffer *ring = req->ring;
int ret;

ret = intel_ring_begin(req, 2);
@@ -2440,7 +2440,7 @@ gen6_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
static int gen6_ring_flush(struct drm_i915_gem_request *req,
u32 invalidate, u32 flush)
{
- struct intel_ringbuffer *ring = req->ringbuf;
+ struct intel_ringbuffer *ring = req->ring;
uint32_t cmd;
int ret;
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:16:19 UTC
Permalink
If, when we store the reset_counter for the operation, we ensure that
the GPU is neither wedged nor in the middle of a reset, we can then
assert that if any reset occurs the reset_counter must change. Later we
can simply compare the operation's reset epoch against the current
counter to see whether we need to abort the operation (to handle the
hang).
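
As a sketch of the pattern (illustrative only - begin_flip() and
flip_aborted_by_reset() are hypothetical helpers, not the i915 API;
i915_reset_counter() and __i915_reset_in_progress_or_wedged() are the
real calls used in the diff below):

static int begin_flip(struct intel_crtc *crtc, struct i915_gpu_error *error)
{
	/* Record the reset epoch and refuse to start in a bad state. */
	crtc->reset_counter = i915_reset_counter(error);
	if (__i915_reset_in_progress_or_wedged(crtc->reset_counter))
		return -EIO;
	return 0;
}

static bool flip_aborted_by_reset(struct intel_crtc *crtc,
				  struct i915_gpu_error *error)
{
	/* Any reset since begin_flip() must have changed the counter. */
	return i915_reset_counter(error) != crtc->reset_counter;
}

Note that intel_crtc_page_flip() below checks the epoch before taking
the unpin_work reference, so the -EIO error path needs no unwinding.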

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
Cc: Daniel Vetter <***@ffwll.ch>
Reviewed-by: Daniel Vetter <***@ffwll.ch>
---
drivers/gpu/drm/i915/intel_display.c | 16 +++++++++-------
1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 0933bdbaa935..183c05bdb220 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -3288,14 +3288,12 @@ void intel_finish_reset(struct drm_device *dev)
static bool intel_crtc_has_pending_flip(struct drm_crtc *crtc)
{
struct drm_device *dev = crtc->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
unsigned reset_counter;
bool pending;

- reset_counter = i915_reset_counter(&dev_priv->gpu_error);
- if (intel_crtc->reset_counter != reset_counter ||
- __i915_reset_in_progress_or_wedged(reset_counter))
+ reset_counter = i915_reset_counter(&to_i915(dev)->gpu_error);
+ if (intel_crtc->reset_counter != reset_counter)
return false;

spin_lock_irq(&dev->event_lock);
@@ -11011,8 +11009,7 @@ static bool page_flip_finished(struct intel_crtc *crtc)
unsigned reset_counter;

reset_counter = i915_reset_counter(&dev_priv->gpu_error);
- if (crtc->reset_counter != reset_counter ||
- __i915_reset_in_progress_or_wedged(reset_counter))
+ if (crtc->reset_counter != reset_counter)
return true;

/*
@@ -11668,8 +11665,13 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
if (ret)
goto cleanup;

- atomic_inc(&intel_crtc->unpin_work_count);
intel_crtc->reset_counter = i915_reset_counter(&dev_priv->gpu_error);
+ if (__i915_reset_in_progress_or_wedged(intel_crtc->reset_counter)) {
+ ret = -EIO;
+ goto cleanup;
+ }
+
+ atomic_inc(&intel_crtc->unpin_work_count);

if (INTEL_INFO(dev)->gen >= 5 || IS_G4X(dev))
work->flip_count = I915_READ(PIPE_FLIPCOUNT_G4X(pipe)) + 1;
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:16:26 UTC
Permalink
The queue never contains more than one item and has no special flags.
It is just a very simple wrapper around the system-wq - a complication
with no benefits.
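
For reference, schedule_delayed_work() is simply queue_delayed_work()
on the system workqueue, so the conversion is behaviour-preserving for
a single self-rearming work item (paraphrasing the workqueue API):

	/* From include/linux/workqueue.h (roughly):
	 *   schedule_delayed_work(dwork, delay)
	 *     == queue_delayed_work(system_wq, dwork, delay)
	 * A delayed_work is not requeued while already pending, so an
	 * ordered queue adds nothing when only one item ever uses it.
	 */
	schedule_delayed_work(&dev_priv->gpu_error.hangcheck_work, delay);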

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_dma.c | 11 -----------
drivers/gpu/drm/i915/i915_drv.h | 1 -
drivers/gpu/drm/i915/i915_irq.c | 6 +++---
3 files changed, 3 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_dma.c b/drivers/gpu/drm/i915/i915_dma.c
index 44a896ce32e6..9e49e304dd8e 100644
--- a/drivers/gpu/drm/i915/i915_dma.c
+++ b/drivers/gpu/drm/i915/i915_dma.c
@@ -1016,14 +1016,6 @@ int i915_driver_load(struct drm_device *dev, unsigned long flags)
goto out_freewq;
}

- dev_priv->gpu_error.hangcheck_wq =
- alloc_ordered_workqueue("i915-hangcheck", 0);
- if (dev_priv->gpu_error.hangcheck_wq == NULL) {
- DRM_ERROR("Failed to create our hangcheck workqueue.\n");
- ret = -ENOMEM;
- goto out_freedpwq;
- }
-
intel_irq_init(dev_priv);
intel_uncore_sanitize(dev);

@@ -1105,8 +1097,6 @@ out_gem_unload:
intel_teardown_gmbus(dev);
intel_teardown_mchbar(dev);
pm_qos_remove_request(&dev_priv->pm_qos);
- destroy_workqueue(dev_priv->gpu_error.hangcheck_wq);
-out_freedpwq:
destroy_workqueue(dev_priv->hotplug.dp_wq);
out_freewq:
destroy_workqueue(dev_priv->wq);
@@ -1209,7 +1199,6 @@ int i915_driver_unload(struct drm_device *dev)

destroy_workqueue(dev_priv->hotplug.dp_wq);
destroy_workqueue(dev_priv->wq);
- destroy_workqueue(dev_priv->gpu_error.hangcheck_wq);
pm_qos_remove_request(&dev_priv->pm_qos);

i915_global_gtt_cleanup(dev);
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index d9d411919779..188bed933f11 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1330,7 +1330,6 @@ struct i915_gpu_error {
/* Hang gpu twice in this window and your context gets banned */
#define DRM_I915_CTX_BAN_PERIOD DIV_ROUND_UP(8*DRM_I915_HANGCHECK_PERIOD, 1000)

- struct workqueue_struct *hangcheck_wq;
struct delayed_work hangcheck_work;

/* For reset and error_state handling. */
diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index 94f5f4e99446..8939438d747d 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -3175,7 +3175,7 @@ out:

void i915_queue_hangcheck(struct drm_i915_private *dev_priv)
{
- struct i915_gpu_error *e = &dev_priv->gpu_error;
+ unsigned long delay;

if (!i915.enable_hangcheck)
return;
@@ -3185,8 +3185,8 @@ void i915_queue_hangcheck(struct drm_i915_private *dev_priv)
* we will ignore a hung ring if a second ring is kept busy.
*/

- queue_delayed_work(e->hangcheck_wq, &e->hangcheck_work,
- round_jiffies_up_relative(DRM_I915_HANGCHECK_JIFFIES));
+ delay = round_jiffies_up_relative(DRM_I915_HANGCHECK_JIFFIES);
+ schedule_delayed_work(&dev_priv->gpu_error.hangcheck_work, delay);
}

static void ibx_irq_reset(struct drm_device *dev)
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:16:54 UTC
Permalink
As we only ever keep the first error state around, we can avoid some
quite intrusive work by not recording the error a second time. This
does move the race whereby the user could discard one error state as
the second is being captured, but that race exists in the current code
and we hope that recapturing error state is only done for debugging.

Note that as we discard the error state for simulated errors, the igt
tests that exercise error capture continue to function.
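
The early-out is an unlocked peek before the expensive capture; a
minimal sketch of the shape (the capture itself is elided):

	/* READ_ONCE() stops the compiler caching or tearing the load;
	 * racing with a concurrent userspace discard is benign - the
	 * worst case is one redundant capture.
	 */
	if (READ_ONCE(dev_priv->gpu_error.first_error))
		return;	/* first error wins; skip the intrusive capture */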

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_gpu_error.c | 3 +++
1 file changed, 3 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index 4f17d6847569..86f582115313 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -1312,6 +1312,9 @@ void i915_capture_error_state(struct drm_device *dev, bool wedged,
struct drm_i915_error_state *error;
unsigned long flags;

+ if (READ_ONCE(dev_priv->gpu_error.first_error))
+ return;
+
/* Account for pipe specific data like PIPE*STAT */
error = kzalloc(sizeof(*error), GFP_ATOMIC);
if (!error) {
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:17:02 UTC
Permalink
I have instances where I want to use drm_malloc_ab() but with a custom
gfp mask, and for those where I want a temporary allocation, I want to
try a high-order kmalloc() before falling back to vmalloc().

So refactor my usage into drm_malloc_gfp().
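
Typical use of the new helper, mirroring the conversions below.
GFP_TEMPORARY includes __GFP_RECLAIMABLE, so drm_malloc_gfp() first
attempts the kmalloc() with __GFP_NOWARN | __GFP_NORETRY and only then
falls back to vmalloc():

	struct page **pvec;

	pvec = drm_malloc_gfp(npages, sizeof(struct page *), GFP_TEMPORARY);
	if (pvec == NULL)
		return -ENOMEM;
	/* ... fill and use the temporary page array ... */
	drm_free_large(pvec);	/* kvfree() frees either allocation */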

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
Cc: dri-***@lists.freedesktop.org
Cc: Ville Syrjälä <***@linux.intel.com>
Reviewed-by: Ville Syrjälä <***@linux.intel.com>
Acked-by: Dave Airlie <***@redhat.com>
---
drivers/gpu/drm/i915/i915_gem.c | 4 +---
drivers/gpu/drm/i915/i915_gem_execbuffer.c | 8 +++-----
drivers/gpu/drm/i915/i915_gem_gtt.c | 5 +++--
drivers/gpu/drm/i915/i915_gem_userptr.c | 15 ++++-----------
include/drm/drm_mem_util.h | 19 +++++++++++++++++++
5 files changed, 30 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 2912e8714f5b..a4f9c5bbb883 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2040,9 +2040,7 @@ void *i915_gem_object_pin_vmap(struct drm_i915_gem_object *obj)
int n;

n = obj->base.size >> PAGE_SHIFT;
- pages = kmalloc(n*sizeof(*pages), GFP_TEMPORARY | __GFP_NOWARN);
- if (pages == NULL)
- pages = drm_malloc_ab(n, sizeof(*pages));
+ pages = drm_malloc_gfp(n, sizeof(*pages), GFP_TEMPORARY);
if (pages != NULL) {
n = 0;
for_each_sg_page(obj->pages->sgl, &sg_iter, obj->pages->nents, 0)
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index da1c6fe5b40e..dfabeee2ff0b 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -1766,11 +1766,9 @@ i915_gem_execbuffer2(struct drm_device *dev, void *data,
return -EINVAL;
}

- exec2_list = kmalloc(sizeof(*exec2_list)*args->buffer_count,
- GFP_TEMPORARY | __GFP_NOWARN | __GFP_NORETRY);
- if (exec2_list == NULL)
- exec2_list = drm_malloc_ab(sizeof(*exec2_list),
- args->buffer_count);
+ exec2_list = drm_malloc_gfp(sizeof(*exec2_list),
+ args->buffer_count,
+ GFP_TEMPORARY);
if (exec2_list == NULL) {
DRM_DEBUG("Failed to allocate exec list for %d buffers\n",
args->buffer_count);
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 56f4f2e58d53..224fe89baca3 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -3376,8 +3376,9 @@ intel_rotate_fb_obj_pages(struct i915_ggtt_view *ggtt_view,
int ret = -ENOMEM;

/* Allocate a temporary list of source pages for random access. */
- page_addr_list = drm_malloc_ab(obj->base.size / PAGE_SIZE,
- sizeof(dma_addr_t));
+ page_addr_list = drm_malloc_gfp(obj->base.size / PAGE_SIZE,
+ sizeof(dma_addr_t),
+ GFP_TEMPORARY);
if (!page_addr_list)
return ERR_PTR(ret);

diff --git a/drivers/gpu/drm/i915/i915_gem_userptr.c b/drivers/gpu/drm/i915/i915_gem_userptr.c
index 1a5f89dba4af..251e81c4b0ea 100644
--- a/drivers/gpu/drm/i915/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/i915_gem_userptr.c
@@ -573,10 +573,7 @@ __i915_gem_userptr_get_pages_worker(struct work_struct *_work)
ret = -ENOMEM;
pinned = 0;

- pvec = kmalloc(npages*sizeof(struct page *),
- GFP_TEMPORARY | __GFP_NOWARN | __GFP_NORETRY);
- if (pvec == NULL)
- pvec = drm_malloc_ab(npages, sizeof(struct page *));
+ pvec = drm_malloc_gfp(npages, sizeof(struct page *), GFP_TEMPORARY);
if (pvec != NULL) {
struct mm_struct *mm = obj->userptr.mm->mm;

@@ -713,14 +710,10 @@ i915_gem_userptr_get_pages(struct drm_i915_gem_object *obj)
pvec = NULL;
pinned = 0;
if (obj->userptr.mm->mm == current->mm) {
- pvec = kmalloc(num_pages*sizeof(struct page *),
- GFP_TEMPORARY | __GFP_NOWARN | __GFP_NORETRY);
+ pvec = drm_malloc_gfp(num_pages, sizeof(struct page *), GFP_TEMPORARY);
if (pvec == NULL) {
- pvec = drm_malloc_ab(num_pages, sizeof(struct page *));
- if (pvec == NULL) {
- __i915_gem_userptr_set_active(obj, false);
- return -ENOMEM;
- }
+ __i915_gem_userptr_set_active(obj, false);
+ return -ENOMEM;
}

pinned = __get_user_pages_fast(obj->userptr.ptr, num_pages,
diff --git a/include/drm/drm_mem_util.h b/include/drm/drm_mem_util.h
index e42495ad8136..741ce75a72b4 100644
--- a/include/drm/drm_mem_util.h
+++ b/include/drm/drm_mem_util.h
@@ -54,6 +54,25 @@ static __inline__ void *drm_malloc_ab(size_t nmemb, size_t size)
GFP_KERNEL | __GFP_HIGHMEM, PAGE_KERNEL);
}

+static __inline__ void *drm_malloc_gfp(size_t nmemb, size_t size, gfp_t gfp)
+{
+ if (size != 0 && nmemb > SIZE_MAX / size)
+ return NULL;
+
+ if (size * nmemb <= PAGE_SIZE)
+ return kmalloc(nmemb * size, gfp);
+
+ if (gfp & __GFP_RECLAIMABLE) {
+ void *ptr = kmalloc(nmemb * size,
+ gfp | __GFP_NOWARN | __GFP_NORETRY);
+ if (ptr)
+ return ptr;
+ }
+
+ return __vmalloc(size * nmemb,
+ gfp | __GFP_HIGHMEM, PAGE_KERNEL);
+}
+
static __inline void drm_free_large(void *ptr)
{
kvfree(ptr);
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:16:58 UTC
Permalink
Now that we derive requests from struct fence, swap over to its
nomenclature for references. It's shorter and more idiomatic across the
kernel.

s/i915_gem_request_reference/i915_gem_request_get/
s/i915_gem_request_unreference/i915_gem_request_put/
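
A typical lifetime now reads like any other kernel get/put pair
(adapted from the i915_gem_wait_ioctl() hunk below):

	req = i915_gem_request_get(obj->last_read_req[i]);	/* +1 ref */
	mutex_unlock(&dev->struct_mutex);

	ret = __i915_wait_request(req, true, NULL, NULL);
	i915_gem_request_put(req);				/* -1 ref */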

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_gem.c | 14 +++++++-------
drivers/gpu/drm/i915/i915_gem_request.c | 2 +-
drivers/gpu/drm/i915/i915_gem_request.h | 8 ++++----
drivers/gpu/drm/i915/intel_breadcrumbs.c | 4 ++--
drivers/gpu/drm/i915/intel_display.c | 4 ++--
drivers/gpu/drm/i915/intel_lrc.c | 4 ++--
drivers/gpu/drm/i915/intel_pm.c | 5 ++---
7 files changed, 20 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 6d8d65304abf..fd61e722b595 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1185,7 +1185,7 @@ i915_gem_object_wait_rendering__nonblocking(struct drm_i915_gem_object *obj,
if (req == NULL)
return 0;

- requests[n++] = i915_gem_request_reference(req);
+ requests[n++] = i915_gem_request_get(req);
} else {
for (i = 0; i < I915_NUM_RINGS; i++) {
struct drm_i915_gem_request *req;
@@ -1194,7 +1194,7 @@ i915_gem_object_wait_rendering__nonblocking(struct drm_i915_gem_object *obj,
if (req == NULL)
continue;

- requests[n++] = i915_gem_request_reference(req);
+ requests[n++] = i915_gem_request_get(req);
}
}

@@ -1207,7 +1207,7 @@ i915_gem_object_wait_rendering__nonblocking(struct drm_i915_gem_object *obj,
for (i = 0; i < n; i++) {
if (ret == 0)
i915_gem_object_retire_request(obj, requests[i]);
- i915_gem_request_unreference(requests[i]);
+ i915_gem_request_put(requests[i]);
}

return ret;
@@ -2492,7 +2492,7 @@ i915_gem_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
if (obj->last_read_req[i] == NULL)
continue;

- req[n++] = i915_gem_request_reference(obj->last_read_req[i]);
+ req[n++] = i915_gem_request_get(obj->last_read_req[i]);
}

mutex_unlock(&dev->struct_mutex);
@@ -2502,7 +2502,7 @@ i915_gem_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
ret = __i915_wait_request(req[i], true,
args->timeout_ns > 0 ? &args->timeout_ns : NULL,
to_rps_client(file));
- i915_gem_request_unreference(req[i]);
+ i915_gem_request_put(req[i]);
}
return ret;

@@ -3498,14 +3498,14 @@ i915_gem_ring_throttle(struct drm_device *dev, struct drm_file *file)
target = request;
}
if (target)
- i915_gem_request_reference(target);
+ i915_gem_request_get(target);
spin_unlock(&file_priv->mm.lock);

if (target == NULL)
return 0;

ret = __i915_wait_request(target, true, NULL, NULL);
- i915_gem_request_unreference(target);
+ i915_gem_request_put(target);

return ret;
}
diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
index e366ca0dcd99..a796dbd1b0e4 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/i915_gem_request.c
@@ -326,7 +326,7 @@ static void __i915_gem_request_release(struct drm_i915_gem_request *request)
i915_gem_request_remove_from_client(request);

i915_gem_context_unreference(request->ctx);
- i915_gem_request_unreference(request);
+ i915_gem_request_put(request);
}

void i915_gem_request_cancel(struct drm_i915_gem_request *req)
diff --git a/drivers/gpu/drm/i915/i915_gem_request.h b/drivers/gpu/drm/i915/i915_gem_request.h
index b55d0b7c7f2a..0ab14fd0fce0 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.h
+++ b/drivers/gpu/drm/i915/i915_gem_request.h
@@ -147,13 +147,13 @@ to_request(struct fence *fence)
}

static inline struct drm_i915_gem_request *
-i915_gem_request_reference(struct drm_i915_gem_request *req)
+i915_gem_request_get(struct drm_i915_gem_request *req)
{
return to_request(fence_get(&req->fence));
}

static inline void
-i915_gem_request_unreference(struct drm_i915_gem_request *req)
+i915_gem_request_put(struct drm_i915_gem_request *req)
{
fence_put(&req->fence);
}
@@ -162,10 +162,10 @@ static inline void i915_gem_request_assign(struct drm_i915_gem_request **pdst,
struct drm_i915_gem_request *src)
{
if (src)
- i915_gem_request_reference(src);
+ i915_gem_request_get(src);

if (*pdst)
- i915_gem_request_unreference(*pdst);
+ i915_gem_request_put(*pdst);

*pdst = src;
}
diff --git a/drivers/gpu/drm/i915/intel_breadcrumbs.c b/drivers/gpu/drm/i915/intel_breadcrumbs.c
index 61e18cb90850..aca1b72edcd8 100644
--- a/drivers/gpu/drm/i915/intel_breadcrumbs.c
+++ b/drivers/gpu/drm/i915/intel_breadcrumbs.c
@@ -391,7 +391,7 @@ static int intel_breadcrumbs_signaler(void *arg)
intel_engine_remove_wait(engine, &signal->wait);

fence_signal(&signal->request->fence);
- i915_gem_request_unreference(signal->request);
+ i915_gem_request_put(signal->request);

/* Find the next oldest signal. Note that as we have
* not been holding the lock, another client may
@@ -459,7 +459,7 @@ int intel_engine_enable_signaling(struct drm_i915_gem_request *request)
signal->wait.task = task;
signal->wait.seqno = request->fence.seqno;

- signal->request = i915_gem_request_reference(request);
+ signal->request = i915_gem_request_get(request);

/* Insert ourselves into the retirement ordered list of signals
* on this engine. We track the oldest seqno as that will be the
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 32885b8d5c02..ae247927e931 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -11431,7 +11431,7 @@ static void intel_mmio_flip_work_func(struct work_struct *work)
WARN_ON(__i915_wait_request(mmio_flip->req,
false, NULL,
&mmio_flip->i915->rps.mmioflips));
- i915_gem_request_unreference(mmio_flip->req);
+ i915_gem_request_put(mmio_flip->req);
}

/* For framebuffer backed by dmabuf, wait for fence */
@@ -11455,7 +11455,7 @@ static int intel_queue_mmio_flip(struct drm_device *dev,
return -ENOMEM;

mmio_flip->i915 = to_i915(dev);
- mmio_flip->req = i915_gem_request_reference(obj->last_write_req);
+ mmio_flip->req = i915_gem_request_get(obj->last_write_req);
mmio_flip->crtc = to_intel_crtc(crtc);
mmio_flip->rotation = crtc->primary->state->rotation;

diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index f43a94ae5c76..433e9f60e926 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -587,7 +587,7 @@ static int execlists_context_queue(struct drm_i915_gem_request *request)
struct drm_i915_gem_request *cursor;
int num_elements = 0;

- i915_gem_request_reference(request);
+ i915_gem_request_get(request);

spin_lock_irq(&ring->execlist_lock);

@@ -983,7 +983,7 @@ void intel_execlists_retire_requests(struct intel_engine_cs *ring)
if (ctx_obj && (ctx != ring->default_context))
intel_lr_context_unpin(req);
list_del(&req->execlist_link);
- i915_gem_request_unreference(req);
+ i915_gem_request_put(req);
}
}

diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index 0e13135aefaa..39b7ca9c3e66 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -7289,7 +7289,7 @@ static void __intel_rps_boost_work(struct work_struct *work)
gen6_rps_boost(to_i915(req->ring->dev), NULL,
req->emitted_jiffies);

- i915_gem_request_unreference(req);
+ i915_gem_request_put(req);
kfree(boost);
}

@@ -7308,8 +7308,7 @@ void intel_queue_rps_boost_for_request(struct drm_device *dev,
if (boost == NULL)
return;

- i915_gem_request_reference(req);
- boost->req = req;
+ boost->req = i915_gem_request_get(req);

INIT_WORK(&boost->work, __intel_rps_boost_work);
queue_work(to_i915(dev)->wq, &boost->work);
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:16:32 UTC
Permalink
By using the same address for storing the HWS on every platform, we can
remove the platform-specific vfuncs and reduce the get-seqno routine to
a single read of a cached memory location.
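
With every engine using I915_GEM_HWS_INDEX in its status page, the read
boils down to a load from a CPU-cached page; intel_read_status_page()
at this point is roughly:

	static inline u32
	intel_read_status_page(struct intel_engine_cs *ring, int reg)
	{
		barrier();	/* keep the compiler from caching the load */
		return ring->status_page.page_addr[reg];
	}

Platforms whose seqno writes need extra flushing keep the
irq_seqno_barrier() hook (gen6_seqno_barrier, bxt_seqno_barrier)
instead of a bespoke get_seqno().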

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_debugfs.c | 10 ++--
drivers/gpu/drm/i915/i915_drv.h | 4 +-
drivers/gpu/drm/i915/i915_gpu_error.c | 2 +-
drivers/gpu/drm/i915/i915_irq.c | 4 +-
drivers/gpu/drm/i915/i915_trace.h | 2 +-
drivers/gpu/drm/i915/intel_breadcrumbs.c | 4 +-
drivers/gpu/drm/i915/intel_lrc.c | 46 ++---------------
drivers/gpu/drm/i915/intel_ringbuffer.c | 86 ++++++++------------------------
drivers/gpu/drm/i915/intel_ringbuffer.h | 7 +--
9 files changed, 43 insertions(+), 122 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index d09e48455dcb..5a706c700684 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -600,7 +600,7 @@ static int i915_gem_pageflip_info(struct seq_file *m, void *data)
ring->name,
i915_gem_request_get_seqno(work->flip_queued_req),
dev_priv->next_seqno,
- ring->get_seqno(ring),
+ intel_ring_get_seqno(ring),
i915_gem_request_completed(work->flip_queued_req));
} else
seq_printf(m, "Flip not associated with any ring\n");
@@ -732,10 +732,8 @@ static void i915_ring_seqno_info(struct seq_file *m,
{
struct rb_node *rb;

- if (ring->get_seqno) {
- seq_printf(m, "Current sequence (%s): %x\n",
- ring->name, ring->get_seqno(ring));
- }
+ seq_printf(m, "Current sequence (%s): %x\n",
+ ring->name, intel_ring_get_seqno(ring));

spin_lock(&ring->breadcrumbs.lock);
for (rb = rb_first(&ring->breadcrumbs.waiters);
@@ -1355,7 +1353,7 @@ static int i915_hangcheck_info(struct seq_file *m, void *unused)

for_each_ring(ring, dev_priv, i) {
acthd[i] = intel_ring_get_active_head(ring);
- seqno[i] = ring->get_seqno(ring);
+ seqno[i] = intel_ring_get_seqno(ring);
}

i915_get_extra_instdone(dev, instdone);
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 44d46018ee13..fcedcbc50834 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2971,13 +2971,13 @@ i915_seqno_passed(uint32_t seq1, uint32_t seq2)

static inline bool i915_gem_request_started(struct drm_i915_gem_request *req)
{
- return i915_seqno_passed(req->ring->get_seqno(req->ring),
+ return i915_seqno_passed(intel_ring_get_seqno(req->ring),
req->previous_seqno);
}

static inline bool i915_gem_request_completed(struct drm_i915_gem_request *req)
{
- return i915_seqno_passed(req->ring->get_seqno(req->ring),
+ return i915_seqno_passed(intel_ring_get_seqno(req->ring),
req->seqno);
}

diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index 01d0206ca4dd..3e137fc701cf 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -903,7 +903,7 @@ static void i915_record_ring_state(struct drm_device *dev,
ering->waiting = intel_engine_has_waiter(ring);
ering->instpm = I915_READ(RING_INSTPM(ring->mmio_base));
ering->acthd = intel_ring_get_active_head(ring);
- ering->seqno = ring->get_seqno(ring);
+ ering->seqno = intel_ring_get_seqno(ring);
ering->start = I915_READ_START(ring);
ering->head = I915_READ_HEAD(ring);
ering->tail = I915_READ_TAIL(ring);
diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index d73669783045..627c7fb6aa9b 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -2903,7 +2903,7 @@ static int semaphore_passed(struct intel_engine_cs *ring)
if (signaller->hangcheck.deadlock >= I915_NUM_RINGS)
return -1;

- if (i915_seqno_passed(signaller->get_seqno(signaller), seqno))
+ if (i915_seqno_passed(intel_ring_get_seqno(signaller), seqno))
return 1;

/* cursory check for an unkickable deadlock */
@@ -3068,7 +3068,7 @@ static void i915_hangcheck_elapsed(struct work_struct *work)
semaphore_clear_deadlocks(dev_priv);

acthd = intel_ring_get_active_head(ring);
- seqno = ring->get_seqno(ring);
+ seqno = intel_ring_get_seqno(ring);

if (ring->hangcheck.seqno == seqno) {
if (ring_idle(ring, seqno)) {
diff --git a/drivers/gpu/drm/i915/i915_trace.h b/drivers/gpu/drm/i915/i915_trace.h
index cfb5f78a6e84..efca75bcace3 100644
--- a/drivers/gpu/drm/i915/i915_trace.h
+++ b/drivers/gpu/drm/i915/i915_trace.h
@@ -573,7 +573,7 @@ TRACE_EVENT(i915_gem_request_notify,
TP_fast_assign(
__entry->dev = ring->dev->primary->index;
__entry->ring = ring->id;
- __entry->seqno = ring->get_seqno(ring);
+ __entry->seqno = intel_ring_get_seqno(ring);
),

TP_printk("dev=%u, ring=%u, seqno=%u",
diff --git a/drivers/gpu/drm/i915/intel_breadcrumbs.c b/drivers/gpu/drm/i915/intel_breadcrumbs.c
index 10b0add54acf..f66acf820c40 100644
--- a/drivers/gpu/drm/i915/intel_breadcrumbs.c
+++ b/drivers/gpu/drm/i915/intel_breadcrumbs.c
@@ -127,7 +127,7 @@ bool intel_engine_add_wait(struct intel_engine_cs *engine,
struct intel_wait *wait)
{
struct intel_breadcrumbs *b = &engine->breadcrumbs;
- u32 seqno = engine->get_seqno(engine);
+ u32 seqno = intel_ring_get_seqno(engine);
struct rb_node **p, *parent, *completed;
bool first;

@@ -269,7 +269,7 @@ void intel_engine_remove_wait(struct intel_engine_cs *engine,
* the first_waiter. This is undesirable if that
* waiter is a high priority task.
*/
- u32 seqno = engine->get_seqno(engine);
+ u32 seqno = intel_ring_get_seqno(engine);
while (i915_seqno_passed(seqno,
to_wait(next)->seqno)) {
struct rb_node *n = rb_next(next);
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 333e95bda78a..ad51b1fc37cd 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1775,16 +1775,6 @@ static int gen8_emit_flush_render(struct drm_i915_gem_request *request,
return 0;
}

-static u32 gen8_get_seqno(struct intel_engine_cs *ring)
-{
- return intel_read_status_page(ring, I915_GEM_HWS_INDEX);
-}
-
-static void gen8_set_seqno(struct intel_engine_cs *ring, u32 seqno)
-{
- intel_write_status_page(ring, I915_GEM_HWS_INDEX, seqno);
-}
-
static void bxt_seqno_barrier(struct intel_engine_cs *ring)
{
/*
@@ -1800,14 +1790,6 @@ static void bxt_seqno_barrier(struct intel_engine_cs *ring)
intel_flush_status_page(ring, I915_GEM_HWS_INDEX);
}

-static void bxt_a_set_seqno(struct intel_engine_cs *ring, u32 seqno)
-{
- intel_write_status_page(ring, I915_GEM_HWS_INDEX, seqno);
-
- /* See bxt_a_get_seqno() explaining the reason for the clflush. */
- intel_flush_status_page(ring, I915_GEM_HWS_INDEX);
-}
-
static int gen8_emit_request(struct drm_i915_gem_request *request)
{
struct intel_ringbuffer *ringbuf = request->ringbuf;
@@ -1832,7 +1814,7 @@ static int gen8_emit_request(struct drm_i915_gem_request *request)
(ring->status_page.gfx_addr +
(I915_GEM_HWS_INDEX << MI_STORE_DWORD_INDEX_SHIFT)));
intel_logical_ring_emit(ringbuf, 0);
- intel_logical_ring_emit(ringbuf, i915_gem_request_get_seqno(request));
+ intel_logical_ring_emit(ringbuf, request->seqno);
intel_logical_ring_emit(ringbuf, MI_USER_INTERRUPT);
intel_logical_ring_emit(ringbuf, MI_NOOP);
intel_logical_ring_advance_and_submit(request);
@@ -2002,12 +1984,8 @@ static int logical_render_ring_init(struct drm_device *dev)
ring->init_hw = gen8_init_render_ring;
ring->init_context = gen8_init_rcs_context;
ring->cleanup = intel_fini_pipe_control;
- ring->get_seqno = gen8_get_seqno;
- ring->set_seqno = gen8_set_seqno;
- if (IS_BXT_REVID(dev, 0, BXT_REVID_A1)) {
+ if (IS_BXT_REVID(dev, 0, BXT_REVID_A1))
ring->irq_seqno_barrier = bxt_seqno_barrier;
- ring->set_seqno = bxt_a_set_seqno;
- }
ring->emit_request = gen8_emit_request;
ring->emit_flush = gen8_emit_flush_render;
ring->irq_get = gen8_logical_ring_get_irq;
@@ -2053,12 +2031,8 @@ static int logical_bsd_ring_init(struct drm_device *dev)
GT_CONTEXT_SWITCH_INTERRUPT << GEN8_VCS1_IRQ_SHIFT;

ring->init_hw = gen8_init_common_ring;
- ring->get_seqno = gen8_get_seqno;
- ring->set_seqno = gen8_set_seqno;
- if (IS_BXT_REVID(dev, 0, BXT_REVID_A1)) {
+ if (IS_BXT_REVID(dev, 0, BXT_REVID_A1))
ring->irq_seqno_barrier = bxt_seqno_barrier;
- ring->set_seqno = bxt_a_set_seqno;
- }
ring->emit_request = gen8_emit_request;
ring->emit_flush = gen8_emit_flush;
ring->irq_get = gen8_logical_ring_get_irq;
@@ -2082,8 +2056,6 @@ static int logical_bsd2_ring_init(struct drm_device *dev)
GT_CONTEXT_SWITCH_INTERRUPT << GEN8_VCS2_IRQ_SHIFT;

ring->init_hw = gen8_init_common_ring;
- ring->get_seqno = gen8_get_seqno;
- ring->set_seqno = gen8_set_seqno;
ring->emit_request = gen8_emit_request;
ring->emit_flush = gen8_emit_flush;
ring->irq_get = gen8_logical_ring_get_irq;
@@ -2107,12 +2079,8 @@ static int logical_blt_ring_init(struct drm_device *dev)
GT_CONTEXT_SWITCH_INTERRUPT << GEN8_BCS_IRQ_SHIFT;

ring->init_hw = gen8_init_common_ring;
- ring->get_seqno = gen8_get_seqno;
- ring->set_seqno = gen8_set_seqno;
- if (IS_BXT_REVID(dev, 0, BXT_REVID_A1)) {
+ if (IS_BXT_REVID(dev, 0, BXT_REVID_A1))
ring->irq_seqno_barrier = bxt_seqno_barrier;
- ring->set_seqno = bxt_a_set_seqno;
- }
ring->emit_request = gen8_emit_request;
ring->emit_flush = gen8_emit_flush;
ring->irq_get = gen8_logical_ring_get_irq;
@@ -2136,12 +2104,8 @@ static int logical_vebox_ring_init(struct drm_device *dev)
GT_CONTEXT_SWITCH_INTERRUPT << GEN8_VECS_IRQ_SHIFT;

ring->init_hw = gen8_init_common_ring;
- ring->get_seqno = gen8_get_seqno;
- ring->set_seqno = gen8_set_seqno;
- if (IS_BXT_REVID(dev, 0, BXT_REVID_A1)) {
+ if (IS_BXT_REVID(dev, 0, BXT_REVID_A1))
ring->irq_seqno_barrier = bxt_seqno_barrier;
- ring->set_seqno = bxt_a_set_seqno;
- }
ring->emit_request = gen8_emit_request;
ring->emit_flush = gen8_emit_flush;
ring->irq_get = gen8_logical_ring_get_irq;
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index 57ec21c5b1ab..c86d0e17d785 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -1216,19 +1216,17 @@ static int gen8_rcs_signal(struct drm_i915_gem_request *signaller_req,
return ret;

for_each_ring(waiter, dev_priv, i) {
- u32 seqno;
u64 gtt_offset = signaller->semaphore.signal_ggtt[i];
if (gtt_offset == MI_SEMAPHORE_SYNC_INVALID)
continue;

- seqno = i915_gem_request_get_seqno(signaller_req);
intel_ring_emit(signaller, GFX_OP_PIPE_CONTROL(6));
intel_ring_emit(signaller, PIPE_CONTROL_GLOBAL_GTT_IVB |
PIPE_CONTROL_QW_WRITE |
PIPE_CONTROL_FLUSH_ENABLE);
intel_ring_emit(signaller, lower_32_bits(gtt_offset));
intel_ring_emit(signaller, upper_32_bits(gtt_offset));
- intel_ring_emit(signaller, seqno);
+ intel_ring_emit(signaller, signaller_req->seqno);
intel_ring_emit(signaller, 0);
intel_ring_emit(signaller, MI_SEMAPHORE_SIGNAL |
MI_SEMAPHORE_TARGET(waiter->id));
@@ -1257,18 +1255,16 @@ static int gen8_xcs_signal(struct drm_i915_gem_request *signaller_req,
return ret;

for_each_ring(waiter, dev_priv, i) {
- u32 seqno;
u64 gtt_offset = signaller->semaphore.signal_ggtt[i];
if (gtt_offset == MI_SEMAPHORE_SYNC_INVALID)
continue;

- seqno = i915_gem_request_get_seqno(signaller_req);
intel_ring_emit(signaller, (MI_FLUSH_DW + 1) |
MI_FLUSH_DW_OP_STOREDW);
intel_ring_emit(signaller, lower_32_bits(gtt_offset) |
MI_FLUSH_DW_USE_GTT);
intel_ring_emit(signaller, upper_32_bits(gtt_offset));
- intel_ring_emit(signaller, seqno);
+ intel_ring_emit(signaller, signaller_req->seqno);
intel_ring_emit(signaller, MI_SEMAPHORE_SIGNAL |
MI_SEMAPHORE_TARGET(waiter->id));
intel_ring_emit(signaller, 0);
@@ -1299,11 +1295,9 @@ static int gen6_signal(struct drm_i915_gem_request *signaller_req,
i915_reg_t mbox_reg = signaller->semaphore.mbox.signal[i];

if (i915_mmio_reg_valid(mbox_reg)) {
- u32 seqno = i915_gem_request_get_seqno(signaller_req);
-
intel_ring_emit(signaller, MI_LOAD_REGISTER_IMM(1));
intel_ring_emit_reg(signaller, mbox_reg);
- intel_ring_emit(signaller, seqno);
+ intel_ring_emit(signaller, signaller_req->seqno);
}
}

@@ -1338,7 +1332,7 @@ gen6_add_request(struct drm_i915_gem_request *req)

intel_ring_emit(ring, MI_STORE_DWORD_INDEX);
intel_ring_emit(ring, I915_GEM_HWS_INDEX << MI_STORE_DWORD_INDEX_SHIFT);
- intel_ring_emit(ring, i915_gem_request_get_seqno(req));
+ intel_ring_emit(ring, req->seqno);
intel_ring_emit(ring, MI_USER_INTERRUPT);
__intel_ring_advance(ring);

@@ -1440,7 +1434,9 @@ static int
pc_render_add_request(struct drm_i915_gem_request *req)
{
struct intel_engine_cs *ring = req->ring;
- u32 scratch_addr = ring->scratch.gtt_offset + 2 * CACHELINE_BYTES;
+ u32 addr = req->ring->status_page.gfx_addr +
+ (I915_GEM_HWS_INDEX << MI_STORE_DWORD_INDEX_SHIFT);
+ u32 scratch_addr = addr;
int ret;

/* For Ironlake, MI_USER_INTERRUPT was deprecated and apparently
@@ -1455,11 +1451,12 @@ pc_render_add_request(struct drm_i915_gem_request *req)
if (ret)
return ret;

- intel_ring_emit(ring, GFX_OP_PIPE_CONTROL(4) | PIPE_CONTROL_QW_WRITE |
- PIPE_CONTROL_WRITE_FLUSH |
- PIPE_CONTROL_TEXTURE_CACHE_INVALIDATE);
- intel_ring_emit(ring, ring->scratch.gtt_offset | PIPE_CONTROL_GLOBAL_GTT);
- intel_ring_emit(ring, i915_gem_request_get_seqno(req));
+ intel_ring_emit(ring,
+ GFX_OP_PIPE_CONTROL(4) |
+ PIPE_CONTROL_QW_WRITE |
+ PIPE_CONTROL_WRITE_FLUSH);
+ intel_ring_emit(ring, addr | PIPE_CONTROL_GLOBAL_GTT);
+ intel_ring_emit(ring, req->seqno);
intel_ring_emit(ring, 0);
PIPE_CONTROL_FLUSH(ring, scratch_addr);
scratch_addr += 2 * CACHELINE_BYTES; /* write to separate cachelines */
@@ -1473,12 +1470,12 @@ pc_render_add_request(struct drm_i915_gem_request *req)
scratch_addr += 2 * CACHELINE_BYTES;
PIPE_CONTROL_FLUSH(ring, scratch_addr);

- intel_ring_emit(ring, GFX_OP_PIPE_CONTROL(4) | PIPE_CONTROL_QW_WRITE |
+ intel_ring_emit(ring, GFX_OP_PIPE_CONTROL(4) |
+ PIPE_CONTROL_QW_WRITE |
PIPE_CONTROL_WRITE_FLUSH |
- PIPE_CONTROL_TEXTURE_CACHE_INVALIDATE |
PIPE_CONTROL_NOTIFY);
- intel_ring_emit(ring, ring->scratch.gtt_offset | PIPE_CONTROL_GLOBAL_GTT);
- intel_ring_emit(ring, i915_gem_request_get_seqno(req));
+ intel_ring_emit(ring, addr | PIPE_CONTROL_GLOBAL_GTT);
+ intel_ring_emit(ring, req->seqno);
intel_ring_emit(ring, 0);
__intel_ring_advance(ring);

@@ -1506,30 +1503,6 @@ gen6_seqno_barrier(struct intel_engine_cs *ring)
intel_flush_status_page(ring, I915_GEM_HWS_INDEX);
}

-static u32
-ring_get_seqno(struct intel_engine_cs *ring)
-{
- return intel_read_status_page(ring, I915_GEM_HWS_INDEX);
-}
-
-static void
-ring_set_seqno(struct intel_engine_cs *ring, u32 seqno)
-{
- intel_write_status_page(ring, I915_GEM_HWS_INDEX, seqno);
-}
-
-static u32
-pc_render_get_seqno(struct intel_engine_cs *ring)
-{
- return ring->scratch.cpu_page[0];
-}
-
-static void
-pc_render_set_seqno(struct intel_engine_cs *ring, u32 seqno)
-{
- ring->scratch.cpu_page[0] = seqno;
-}
-
static bool
gen5_ring_get_irq(struct intel_engine_cs *ring)
{
@@ -1665,7 +1638,7 @@ i9xx_add_request(struct drm_i915_gem_request *req)

intel_ring_emit(ring, MI_STORE_DWORD_INDEX);
intel_ring_emit(ring, I915_GEM_HWS_INDEX << MI_STORE_DWORD_INDEX_SHIFT);
- intel_ring_emit(ring, i915_gem_request_get_seqno(req));
+ intel_ring_emit(ring, req->seqno);
intel_ring_emit(ring, MI_USER_INTERRUPT);
__intel_ring_advance(ring);

@@ -2457,7 +2430,10 @@ void intel_ring_init_seqno(struct intel_engine_cs *ring, u32 seqno)
I915_WRITE(RING_SYNC_2(ring->mmio_base), 0);
}

- ring->set_seqno(ring, seqno);
+ intel_write_status_page(ring, I915_GEM_HWS_INDEX, seqno);
+ if (ring->irq_seqno_barrier)
+ ring->irq_seqno_barrier(ring);
+
ring->hangcheck.seqno = seqno;
}

@@ -2695,8 +2671,6 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
ring->irq_put = gen8_ring_put_irq;
ring->irq_enable_mask = GT_RENDER_USER_INTERRUPT;
ring->irq_seqno_barrier = gen6_seqno_barrier;
- ring->get_seqno = ring_get_seqno;
- ring->set_seqno = ring_set_seqno;
if (i915_semaphore_is_enabled(dev)) {
WARN_ON(!dev_priv->semaphore_obj);
ring->semaphore.sync_to = gen8_ring_sync;
@@ -2713,8 +2687,6 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
ring->irq_put = gen6_ring_put_irq;
ring->irq_enable_mask = GT_RENDER_USER_INTERRUPT;
ring->irq_seqno_barrier = gen6_seqno_barrier;
- ring->get_seqno = ring_get_seqno;
- ring->set_seqno = ring_set_seqno;
if (i915_semaphore_is_enabled(dev)) {
ring->semaphore.sync_to = gen6_ring_sync;
ring->semaphore.signal = gen6_signal;
@@ -2739,8 +2711,6 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
} else if (IS_GEN5(dev)) {
ring->add_request = pc_render_add_request;
ring->flush = gen4_render_ring_flush;
- ring->get_seqno = pc_render_get_seqno;
- ring->set_seqno = pc_render_set_seqno;
ring->irq_get = gen5_ring_get_irq;
ring->irq_put = gen5_ring_put_irq;
ring->irq_enable_mask = GT_RENDER_USER_INTERRUPT |
@@ -2751,8 +2721,6 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
ring->flush = gen2_render_ring_flush;
else
ring->flush = gen4_render_ring_flush;
- ring->get_seqno = ring_get_seqno;
- ring->set_seqno = ring_set_seqno;
if (IS_GEN2(dev)) {
ring->irq_get = i8xx_ring_get_irq;
ring->irq_put = i8xx_ring_put_irq;
@@ -2828,8 +2796,6 @@ int intel_init_bsd_ring_buffer(struct drm_device *dev)
ring->flush = gen6_bsd_ring_flush;
ring->add_request = gen6_add_request;
ring->irq_seqno_barrier = gen6_seqno_barrier;
- ring->get_seqno = ring_get_seqno;
- ring->set_seqno = ring_set_seqno;
if (INTEL_INFO(dev)->gen >= 8) {
ring->irq_enable_mask =
GT_RENDER_USER_INTERRUPT << GEN8_VCS1_IRQ_SHIFT;
@@ -2867,8 +2833,6 @@ int intel_init_bsd_ring_buffer(struct drm_device *dev)
ring->mmio_base = BSD_RING_BASE;
ring->flush = bsd_ring_flush;
ring->add_request = i9xx_add_request;
- ring->get_seqno = ring_get_seqno;
- ring->set_seqno = ring_set_seqno;
if (IS_GEN5(dev)) {
ring->irq_enable_mask = ILK_BSD_USER_INTERRUPT;
ring->irq_get = gen5_ring_get_irq;
@@ -2901,8 +2865,6 @@ int intel_init_bsd2_ring_buffer(struct drm_device *dev)
ring->flush = gen6_bsd_ring_flush;
ring->add_request = gen6_add_request;
ring->irq_seqno_barrier = gen6_seqno_barrier;
- ring->get_seqno = ring_get_seqno;
- ring->set_seqno = ring_set_seqno;
ring->irq_enable_mask =
GT_RENDER_USER_INTERRUPT << GEN8_VCS2_IRQ_SHIFT;
ring->irq_get = gen8_ring_get_irq;
@@ -2932,8 +2894,6 @@ int intel_init_blt_ring_buffer(struct drm_device *dev)
ring->flush = gen6_ring_flush;
ring->add_request = gen6_add_request;
ring->irq_seqno_barrier = gen6_seqno_barrier;
- ring->get_seqno = ring_get_seqno;
- ring->set_seqno = ring_set_seqno;
if (INTEL_INFO(dev)->gen >= 8) {
ring->irq_enable_mask =
GT_RENDER_USER_INTERRUPT << GEN8_BCS_IRQ_SHIFT;
@@ -2990,8 +2950,6 @@ int intel_init_vebox_ring_buffer(struct drm_device *dev)
ring->flush = gen6_ring_flush;
ring->add_request = gen6_add_request;
ring->irq_seqno_barrier = gen6_seqno_barrier;
- ring->get_seqno = ring_get_seqno;
- ring->set_seqno = ring_set_seqno;

if (INTEL_INFO(dev)->gen >= 8) {
ring->irq_enable_mask =
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 3b49726b1732..28ab07b38c05 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -220,9 +220,6 @@ struct intel_engine_cs {
* monotonic, even if not coherent.
*/
void (*irq_seqno_barrier)(struct intel_engine_cs *ring);
- u32 (*get_seqno)(struct intel_engine_cs *ring);
- void (*set_seqno)(struct intel_engine_cs *ring,
- u32 seqno);
int (*dispatch_execbuffer)(struct drm_i915_gem_request *req,
u64 offset, u32 length,
unsigned dispatch_flags);
@@ -502,6 +499,10 @@ int intel_init_blt_ring_buffer(struct drm_device *dev);
int intel_init_vebox_ring_buffer(struct drm_device *dev);

u64 intel_ring_get_active_head(struct intel_engine_cs *ring);
+static inline u32 intel_ring_get_seqno(struct intel_engine_cs *ring)
+{
+ return intel_read_status_page(ring, I915_GEM_HWS_INDEX);
+}

int init_workarounds_ring(struct intel_engine_cs *ring);
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:16:38 UTC
Permalink
Avoid the two calls to ktime_get_raw_ns() (at best it reads the TSC) as
we only need to compute the elapsed time for a timed wait.
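
The saving comes from folding the start timestamp into the caller's
timeout, turning it into an absolute deadline, so an untimed wait never
reads the clock. In isolation the arithmetic is:

	/* entry: *timeout holds nanoseconds remaining */
	*timeout += ktime_get_raw_ns();	/* -> absolute deadline */

	/* ... sleep until completion, signal or timeout ... */

	*timeout -= ktime_get_raw_ns();	/* -> nanoseconds remaining */
	if (*timeout < 0)
		*timeout = 0;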

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_gem.c | 13 +++++--------
1 file changed, 5 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index a0744626a110..b956b8813307 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1220,7 +1220,6 @@ int __i915_wait_request(struct drm_i915_gem_request *req,
int state = interruptible ? TASK_INTERRUPTIBLE : TASK_UNINTERRUPTIBLE;
struct intel_wait wait;
unsigned long timeout_remain;
- s64 before, now;
int ret = 0;

might_sleep();
@@ -1239,13 +1238,12 @@ int __i915_wait_request(struct drm_i915_gem_request *req,
if (*timeout == 0)
return -ETIME;

+ /* Record current time in case interrupted, or wedged */
timeout_remain = nsecs_to_jiffies_timeout(*timeout);
+ *timeout += ktime_get_raw_ns();
}

- /* Record current time in case interrupted by signal, or wedged */
trace_i915_gem_request_wait_begin(req);
- before = ktime_get_raw_ns();
-
if (INTEL_INFO(req->i915)->gen >= 6)
gen6_rps_boost(req->i915, rps, req->emitted_jiffies);

@@ -1298,13 +1296,12 @@ wakeup:
complete:
intel_engine_remove_wait(req->ring, &wait);
__set_task_state(wait.task, TASK_RUNNING);
- now = ktime_get_raw_ns();
trace_i915_gem_request_wait_end(req);

if (timeout) {
- s64 tres = *timeout - (now - before);
-
- *timeout = tres < 0 ? 0 : tres;
+ *timeout -= ktime_get_raw_ns();
+ if (*timeout < 0)
+ *timeout = 0;

/*
* Apparently ktime isn't accurate enough and occasionally has a
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:17:11 UTC
Permalink
Having ringbuf->ring point to an engine is confusing, so rename the
backpointer once again, to ringbuf->engine.

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_guc_submission.c | 10 +++---
drivers/gpu/drm/i915/intel_lrc.c | 35 +++++++++----------
drivers/gpu/drm/i915/intel_ringbuffer.c | 54 +++++++++++++++---------------
drivers/gpu/drm/i915/intel_ringbuffer.h | 2 +-
4 files changed, 49 insertions(+), 52 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_guc_submission.c b/drivers/gpu/drm/i915/i915_guc_submission.c
index e82cc9182dfa..53abe2143f8a 100644
--- a/drivers/gpu/drm/i915/i915_guc_submission.c
+++ b/drivers/gpu/drm/i915/i915_guc_submission.c
@@ -391,7 +391,7 @@ static void guc_init_ctx_desc(struct intel_guc *guc,
for (i = 0; i < I915_NUM_RINGS; i++) {
struct guc_execlist_context *lrc = &desc.lrc[i];
struct intel_ringbuffer *ringbuf = ctx->engine[i].ringbuf;
- struct intel_engine_cs *ring;
+ struct intel_engine_cs *engine;
struct drm_i915_gem_object *obj;
uint64_t ctx_desc;

@@ -406,15 +406,15 @@ static void guc_init_ctx_desc(struct intel_guc *guc,
if (!obj)
break; /* XXX: continue? */

- ring = ringbuf->ring;
- ctx_desc = intel_lr_context_descriptor(ctx, ring);
+ engine = ringbuf->engine;
+ ctx_desc = intel_lr_context_descriptor(ctx, engine);
lrc->context_desc = (u32)ctx_desc;

/* The state page is after PPHWSP */
lrc->ring_lcra = i915_gem_obj_ggtt_offset(obj) +
LRC_STATE_PN * PAGE_SIZE;
lrc->context_id = (client->ctx_index << GUC_ELC_CTXID_OFFSET) |
- (ring->id << GUC_ELC_ENGINE_OFFSET);
+ (engine->id << GUC_ELC_ENGINE_OFFSET);

obj = ringbuf->obj;

@@ -423,7 +423,7 @@ static void guc_init_ctx_desc(struct intel_guc *guc,
lrc->ring_next_free_location = lrc->ring_begin;
lrc->ring_current_tail_pointer_value = 0;

- desc.engines_used |= (1 << ring->id);
+ desc.engines_used |= (1 << engine->id);
}

WARN_ON(desc.engines_used == 0);
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 87d325b6e7dc..8639ebfab96f 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -2179,13 +2179,13 @@ void intel_lr_context_free(struct intel_context *ctx)
if (ctx_obj) {
struct intel_ringbuffer *ringbuf =
ctx->engine[i].ringbuf;
- struct intel_engine_cs *ring = ringbuf->ring;
+ struct intel_engine_cs *engine = ringbuf->engine;

- if (ctx == ring->default_context) {
+ if (ctx == engine->default_context) {
intel_unpin_ringbuffer_obj(ringbuf);
i915_gem_object_ggtt_unpin(ctx_obj);
}
- WARN_ON(ctx->engine[ring->id].pin_count);
+ WARN_ON(ctx->engine[engine->id].pin_count);
intel_ringbuffer_free(ringbuf);
drm_gem_object_unreference(&ctx_obj->base);
}
@@ -2261,57 +2261,54 @@ static void lrc_setup_hardware_status_page(struct intel_engine_cs *ring,
*
* Return: non-zero on error.
*/
-
int intel_lr_context_deferred_alloc(struct intel_context *ctx,
- struct intel_engine_cs *ring)
+ struct intel_engine_cs *engine)
{
- struct drm_device *dev = ring->dev;
struct drm_i915_gem_object *ctx_obj;
uint32_t context_size;
struct intel_ringbuffer *ringbuf;
int ret;

WARN_ON(ctx->legacy_hw_ctx.rcs_state != NULL);
- WARN_ON(ctx->engine[ring->id].state);
+ WARN_ON(ctx->engine[engine->id].state);

- context_size = round_up(intel_lr_context_size(ring), 4096);
+ context_size = round_up(intel_lr_context_size(engine), 4096);

/* One extra page as the sharing data between driver and GuC */
context_size += PAGE_SIZE * LRC_PPHWSP_PN;

- ctx_obj = i915_gem_alloc_object(dev, context_size);
+ ctx_obj = i915_gem_alloc_object(engine->dev, context_size);
if (!ctx_obj) {
DRM_DEBUG_DRIVER("Alloc LRC backing obj failed.\n");
return -ENOMEM;
}

- ringbuf = intel_engine_create_ringbuffer(ring, 4 * PAGE_SIZE);
+ ringbuf = intel_engine_create_ringbuffer(engine, 4 * PAGE_SIZE);
if (IS_ERR(ringbuf)) {
ret = PTR_ERR(ringbuf);
goto error_deref_obj;
}

- ret = populate_lr_context(ctx, ctx_obj, ring, ringbuf);
+ ret = populate_lr_context(ctx, ctx_obj, engine, ringbuf);
if (ret) {
DRM_DEBUG_DRIVER("Failed to populate LRC: %d\n", ret);
goto error_ringbuf;
}

- ctx->engine[ring->id].ringbuf = ringbuf;
- ctx->engine[ring->id].state = ctx_obj;
+ ctx->engine[engine->id].ringbuf = ringbuf;
+ ctx->engine[engine->id].state = ctx_obj;

- if (ctx != ring->default_context && ring->init_context) {
+ if (ctx != engine->default_context && engine->init_context) {
struct drm_i915_gem_request *req;

- ret = i915_gem_request_alloc(ring,
- ctx, &req);
+ ret = i915_gem_request_alloc(engine, ctx, &req);
if (ret) {
DRM_ERROR("ring create req: %d\n",
ret);
goto error_ringbuf;
}

- ret = ring->init_context(req);
+ ret = engine->init_context(req);
if (ret) {
DRM_ERROR("ring init context: %d\n",
ret);
@@ -2326,8 +2323,8 @@ error_ringbuf:
intel_ringbuffer_free(ringbuf);
error_deref_obj:
drm_gem_object_unreference(&ctx_obj->base);
- ctx->engine[ring->id].ringbuf = NULL;
- ctx->engine[ring->id].state = NULL;
+ ctx->engine[engine->id].ringbuf = NULL;
+ ctx->engine[engine->id].state = NULL;
return ret;
}

diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index ae00e79c9c99..c437b61ac1d0 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -1940,7 +1940,7 @@ intel_engine_create_ringbuffer(struct intel_engine_cs *engine, int size)
return ERR_PTR(-ENOMEM);
}

- ring->ring = engine;
+ ring->engine = engine;
list_add(&ring->link, &engine->buffers);

ring->size = size;
@@ -1975,40 +1975,40 @@ intel_ringbuffer_free(struct intel_ringbuffer *ring)
kfree(ring);
}

-static int intel_init_ring_buffer(struct drm_device *dev,
- struct intel_engine_cs *ring)
+static int intel_init_engine(struct drm_device *dev,
+ struct intel_engine_cs *engine)
{
struct intel_ringbuffer *ringbuf;
int ret;

- WARN_ON(ring->buffer);
+ WARN_ON(engine->buffer);

- ring->dev = dev;
- ring->i915 = to_i915(dev);
- ring->fence_context = fence_context_alloc(1);
- INIT_LIST_HEAD(&ring->active_list);
- INIT_LIST_HEAD(&ring->request_list);
- INIT_LIST_HEAD(&ring->execlist_queue);
- INIT_LIST_HEAD(&ring->buffers);
- i915_gem_batch_pool_init(dev, &ring->batch_pool);
- memset(ring->semaphore.sync_seqno, 0, sizeof(ring->semaphore.sync_seqno));
+ engine->dev = dev;
+ engine->i915 = to_i915(dev);
+ engine->fence_context = fence_context_alloc(1);
+ INIT_LIST_HEAD(&engine->active_list);
+ INIT_LIST_HEAD(&engine->request_list);
+ INIT_LIST_HEAD(&engine->execlist_queue);
+ INIT_LIST_HEAD(&engine->buffers);
+ i915_gem_batch_pool_init(dev, &engine->batch_pool);
+ memset(engine->semaphore.sync_seqno, 0, sizeof(engine->semaphore.sync_seqno));

- intel_engine_init_breadcrumbs(ring);
+ intel_engine_init_breadcrumbs(engine);

- ringbuf = intel_engine_create_ringbuffer(ring, 32 * PAGE_SIZE);
+ ringbuf = intel_engine_create_ringbuffer(engine, 32 * PAGE_SIZE);
if (IS_ERR(ringbuf)) {
ret = PTR_ERR(ringbuf);
goto error;
}
- ring->buffer = ringbuf;
+ engine->buffer = ringbuf;

if (I915_NEED_GFX_HWS(dev)) {
- ret = init_status_page(ring);
+ ret = init_status_page(engine);
if (ret)
goto error;
} else {
- BUG_ON(ring->id != RCS);
- ret = init_phys_status_page(ring);
+ BUG_ON(engine->id != RCS);
+ ret = init_phys_status_page(engine);
if (ret)
goto error;
}
@@ -2016,19 +2016,19 @@ static int intel_init_ring_buffer(struct drm_device *dev,
ret = intel_pin_and_map_ringbuffer_obj(dev, ringbuf);
if (ret) {
DRM_ERROR("Failed to pin and map ringbuffer %s: %d\n",
- ring->name, ret);
+ engine->name, ret);
intel_destroy_ringbuffer_obj(ringbuf);
goto error;
}

- ret = i915_cmd_parser_init_ring(ring);
+ ret = i915_cmd_parser_init_ring(engine);
if (ret)
goto error;

return 0;

error:
- intel_cleanup_ring_buffer(ring);
+ intel_cleanup_ring_buffer(engine);
return ret;
}

@@ -2612,7 +2612,7 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
ring->scratch.gtt_offset = i915_gem_obj_ggtt_offset(obj);
}

- ret = intel_init_ring_buffer(dev, ring);
+ ret = intel_init_engine(dev, ring);
if (ret)
return ret;

@@ -2692,7 +2692,7 @@ int intel_init_bsd_ring_buffer(struct drm_device *dev)
}
ring->init_hw = init_ring_common;

- return intel_init_ring_buffer(dev, ring);
+ return intel_init_engine(dev, ring);
}

/**
@@ -2724,7 +2724,7 @@ int intel_init_bsd2_ring_buffer(struct drm_device *dev)
}
ring->init_hw = init_ring_common;

- return intel_init_ring_buffer(dev, ring);
+ return intel_init_engine(dev, ring);
}

int intel_init_blt_ring_buffer(struct drm_device *dev)
@@ -2780,7 +2780,7 @@ int intel_init_blt_ring_buffer(struct drm_device *dev)
}
ring->init_hw = init_ring_common;

- return intel_init_ring_buffer(dev, ring);
+ return intel_init_engine(dev, ring);
}

int intel_init_vebox_ring_buffer(struct drm_device *dev)
@@ -2830,7 +2830,7 @@ int intel_init_vebox_ring_buffer(struct drm_device *dev)
}
ring->init_hw = init_ring_common;

- return intel_init_ring_buffer(dev, ring);
+ return intel_init_engine(dev, ring);
}

int
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index bc6ceb54b1f3..6bd9b356c95d 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -101,7 +101,7 @@ struct intel_ringbuffer {
struct drm_i915_gem_object *obj;
void *virtual_start;

- struct intel_engine_cs *ring;
+ struct intel_engine_cs *engine;
struct list_head link;

u32 head;
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:17:07 UTC
Permalink
Combine the near-identical implementations of intel_logical_ring_begin()
and intel_ring_begin() - the only difference is that the logical wait
has to check for a matching ring (which is assumed by legacy).
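
That one difference is folded into the shared wait path: with
execlists a single engine's request list holds requests for many
ringbuffers, so the space scan skips foreign ones (see the
wait_for_space() hunk earlier in the series):

	list_for_each_entry(target, &engine->request_list, list) {
		if (target->ring != ring)	/* never true for legacy */
			continue;

		space = __intel_ring_space(target->postfix, ring->tail,
					   ring->size);
		if (space >= bytes)
			break;
	}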

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/intel_lrc.c | 141 ++------------------------------
drivers/gpu/drm/i915/intel_lrc.h | 1 -
drivers/gpu/drm/i915/intel_mocs.c | 12 +--
drivers/gpu/drm/i915/intel_ringbuffer.c | 111 +++++++++++++------------
4 files changed, 69 insertions(+), 196 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index dc4fc9d8612c..3d14b69632e8 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -698,48 +698,6 @@ int intel_logical_ring_alloc_request_extras(struct drm_i915_gem_request *request
return 0;
}

-static int logical_ring_wait_for_space(struct drm_i915_gem_request *req,
- int bytes)
-{
- struct intel_ringbuffer *ringbuf = req->ringbuf;
- struct intel_engine_cs *ring = req->ring;
- struct drm_i915_gem_request *target;
- unsigned space;
- int ret;
-
- if (intel_ring_space(ringbuf) >= bytes)
- return 0;
-
- /* The whole point of reserving space is to not wait! */
- WARN_ON(ringbuf->reserved_in_use);
-
- list_for_each_entry(target, &ring->request_list, list) {
- /*
- * The request queue is per-engine, so can contain requests
- * from multiple ringbuffers. Here, we must ignore any that
- * aren't from the ringbuffer we're considering.
- */
- if (target->ringbuf != ringbuf)
- continue;
-
- /* Would completion of this request free enough space? */
- space = __intel_ring_space(target->postfix, ringbuf->tail,
- ringbuf->size);
- if (space >= bytes)
- break;
- }
-
- if (WARN_ON(&target->list == &ring->request_list))
- return -ENOSPC;
-
- ret = i915_wait_request(target);
- if (ret)
- return ret;
-
- ringbuf->space = space;
- return 0;
-}
-
/*
* intel_logical_ring_advance_and_submit() - advance the tail and submit the workload
* @request: Request to advance the logical ringbuffer of.
@@ -763,89 +721,6 @@ intel_logical_ring_advance_and_submit(struct drm_i915_gem_request *request)
execlists_context_queue(request);
}

-static void __wrap_ring_buffer(struct intel_ringbuffer *ringbuf)
-{
- int rem = ringbuf->size - ringbuf->tail;
- memset(ringbuf->virtual_start + ringbuf->tail, 0, rem);
-
- ringbuf->tail = 0;
- intel_ring_update_space(ringbuf);
-}
-
-static int logical_ring_prepare(struct drm_i915_gem_request *req, int bytes)
-{
- struct intel_ringbuffer *ringbuf = req->ringbuf;
- int remain_usable = ringbuf->effective_size - ringbuf->tail;
- int remain_actual = ringbuf->size - ringbuf->tail;
- int ret, total_bytes, wait_bytes = 0;
- bool need_wrap = false;
-
- if (ringbuf->reserved_in_use)
- total_bytes = bytes;
- else
- total_bytes = bytes + ringbuf->reserved_size;
-
- if (unlikely(bytes > remain_usable)) {
- /*
- * Not enough space for the basic request. So need to flush
- * out the remainder and then wait for base + reserved.
- */
- wait_bytes = remain_actual + total_bytes;
- need_wrap = true;
- } else {
- if (unlikely(total_bytes > remain_usable)) {
- /*
- * The base request will fit but the reserved space
- * falls off the end. So only need to to wait for the
- * reserved size after flushing out the remainder.
- */
- wait_bytes = remain_actual + ringbuf->reserved_size;
- need_wrap = true;
- } else if (total_bytes > ringbuf->space) {
- /* No wrapping required, just waiting. */
- wait_bytes = total_bytes;
- }
- }
-
- if (wait_bytes) {
- ret = logical_ring_wait_for_space(req, wait_bytes);
- if (unlikely(ret))
- return ret;
-
- if (need_wrap)
- __wrap_ring_buffer(ringbuf);
- }
-
- return 0;
-}
-
-/**
- * intel_logical_ring_begin() - prepare the logical ringbuffer to accept some commands
- *
- * @req: The request to start some new work for
- * @num_dwords: number of DWORDs that we plan to write to the ringbuffer.
- *
- * The ringbuffer might not be ready to accept the commands right away (maybe it needs to
- * be wrapped, or wait a bit for the tail to be updated). This function takes care of that
- * and also preallocates a request (every workload submission is still mediated through
- * requests, same as it did with legacy ringbuffer submission).
- *
- * Return: non-zero if the ringbuffer is not ready to be written to.
- */
-int intel_logical_ring_begin(struct drm_i915_gem_request *req, int num_dwords)
-{
- int ret;
-
- WARN_ON(req == NULL);
-
- ret = logical_ring_prepare(req, num_dwords * sizeof(uint32_t));
- if (ret)
- return ret;
-
- req->ringbuf->space -= num_dwords * sizeof(uint32_t);
- return 0;
-}
-
int intel_logical_ring_reserve_space(struct drm_i915_gem_request *request)
{
/*
@@ -858,7 +733,7 @@ int intel_logical_ring_reserve_space(struct drm_i915_gem_request *request)
*/
intel_ring_reserved_space_reserve(request->ringbuf, MIN_SPACE_FOR_ADD_REQUEST);

- return intel_logical_ring_begin(request, 0);
+ return intel_ring_begin(request, 0);
}

/**
@@ -928,7 +803,7 @@ int intel_execlists_submission(struct i915_execbuffer_params *params,

if (ring == &dev_priv->ring[RCS] &&
instp_mode != dev_priv->relative_constants_mode) {
- ret = intel_logical_ring_begin(params->request, 4);
+ ret = intel_ring_begin(params->request, 4);
if (ret)
return ret;

@@ -1104,7 +979,7 @@ static int intel_logical_ring_workarounds_emit(struct drm_i915_gem_request *req)
if (ret)
return ret;

- ret = intel_logical_ring_begin(req, w->count * 2 + 2);
+ ret = intel_ring_begin(req, w->count * 2 + 2);
if (ret)
return ret;

@@ -1566,7 +1441,7 @@ static int intel_logical_ring_emit_pdps(struct drm_i915_gem_request *req)
const int num_lri_cmds = GEN8_LEGACY_PDPES * 2;
int i, ret;

- ret = intel_logical_ring_begin(req, num_lri_cmds * 2 + 2);
+ ret = intel_ring_begin(req, num_lri_cmds * 2 + 2);
if (ret)
return ret;

@@ -1611,7 +1486,7 @@ static int gen8_emit_bb_start(struct drm_i915_gem_request *req,
req->ctx->ppgtt->pd_dirty_rings &= ~intel_ring_flag(req->ring);
}

- ret = intel_logical_ring_begin(req, 4);
+ ret = intel_ring_begin(req, 4);
if (ret)
return ret;

@@ -1655,7 +1530,7 @@ static int gen8_emit_flush(struct drm_i915_gem_request *request,
uint32_t cmd;
int ret;

- ret = intel_logical_ring_begin(request, 4);
+ ret = intel_ring_begin(request, 4);
if (ret)
return ret;

@@ -1722,7 +1597,7 @@ static int gen8_emit_flush_render(struct drm_i915_gem_request *request,
vf_flush_wa = true;
}

- ret = intel_logical_ring_begin(request, vf_flush_wa ? 12 : 6);
+ ret = intel_ring_begin(request, vf_flush_wa ? 12 : 6);
if (ret)
return ret;

@@ -1779,7 +1654,7 @@ static int gen8_emit_request(struct drm_i915_gem_request *request)
* used as a workaround for not being allowed to do lite
* restore with HEAD==TAIL (WaIdleLiteRestore).
*/
- ret = intel_logical_ring_begin(request, 8);
+ ret = intel_ring_begin(request, 8);
if (ret)
return ret;

diff --git a/drivers/gpu/drm/i915/intel_lrc.h b/drivers/gpu/drm/i915/intel_lrc.h
index 9d4aa699e593..32401e11cebe 100644
--- a/drivers/gpu/drm/i915/intel_lrc.h
+++ b/drivers/gpu/drm/i915/intel_lrc.h
@@ -60,7 +60,6 @@ int intel_logical_ring_reserve_space(struct drm_i915_gem_request *request);
void intel_logical_ring_stop(struct intel_engine_cs *ring);
void intel_logical_ring_cleanup(struct intel_engine_cs *ring);
int intel_logical_rings_init(struct drm_device *dev);
-int intel_logical_ring_begin(struct drm_i915_gem_request *req, int num_dwords);

int logical_ring_flush_all_caches(struct drm_i915_gem_request *req);

diff --git a/drivers/gpu/drm/i915/intel_mocs.c b/drivers/gpu/drm/i915/intel_mocs.c
index d8a7fdc7baeb..5d4f6f3b67cd 100644
--- a/drivers/gpu/drm/i915/intel_mocs.c
+++ b/drivers/gpu/drm/i915/intel_mocs.c
@@ -200,11 +200,9 @@ static int emit_mocs_control_table(struct drm_i915_gem_request *req,
if (WARN_ON(table->size > GEN9_NUM_MOCS_ENTRIES))
return -ENODEV;

- ret = intel_logical_ring_begin(req, 2 + 2 * GEN9_NUM_MOCS_ENTRIES);
- if (ret) {
- DRM_DEBUG("intel_logical_ring_begin failed %d\n", ret);
+ ret = intel_ring_begin(req, 2 + 2 * GEN9_NUM_MOCS_ENTRIES);
+ if (ret)
return ret;
- }

intel_ring_emit(ringbuf, MI_LOAD_REGISTER_IMM(GEN9_NUM_MOCS_ENTRIES));

@@ -257,11 +255,9 @@ static int emit_mocs_l3cc_table(struct drm_i915_gem_request *req,
if (WARN_ON(table->size > GEN9_NUM_MOCS_ENTRIES))
return -ENODEV;

- ret = intel_logical_ring_begin(req, 2 + GEN9_NUM_MOCS_ENTRIES);
- if (ret) {
- DRM_DEBUG("intel_logical_ring_begin failed %d\n", ret);
+ ret = intel_ring_begin(req, 2 + GEN9_NUM_MOCS_ENTRIES);
+ if (ret)
return ret;
- }

intel_ring_emit(ringbuf,
MI_LOAD_REGISTER_IMM(GEN9_NUM_MOCS_ENTRIES / 2));
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index 86c54584f64a..c694f602a0b8 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -2062,46 +2062,6 @@ void intel_cleanup_ring_buffer(struct intel_engine_cs *ring)
ring->dev = NULL;
}

-static int ring_wait_for_space(struct intel_engine_cs *ring, int n)
-{
- struct intel_ringbuffer *ringbuf = ring->buffer;
- struct drm_i915_gem_request *request;
- unsigned space;
- int ret;
-
- if (intel_ring_space(ringbuf) >= n)
- return 0;
-
- /* The whole point of reserving space is to not wait! */
- WARN_ON(ringbuf->reserved_in_use);
-
- list_for_each_entry(request, &ring->request_list, list) {
- space = __intel_ring_space(request->postfix, ringbuf->tail,
- ringbuf->size);
- if (space >= n)
- break;
- }
-
- if (WARN_ON(&request->list == &ring->request_list))
- return -ENOSPC;
-
- ret = i915_wait_request(request);
- if (ret)
- return ret;
-
- ringbuf->space = space;
- return 0;
-}
-
-static void __wrap_ring_buffer(struct intel_ringbuffer *ringbuf)
-{
- int rem = ringbuf->size - ringbuf->tail;
- memset(ringbuf->virtual_start + ringbuf->tail, 0, rem);
-
- ringbuf->tail = 0;
- intel_ring_update_space(ringbuf);
-}
-
int intel_ring_idle(struct intel_engine_cs *ring)
{
struct drm_i915_gem_request *req;
@@ -2188,9 +2148,59 @@ void intel_ring_reserved_space_end(struct intel_ringbuffer *ringbuf)
ringbuf->reserved_in_use = false;
}

-static int __intel_ring_prepare(struct intel_engine_cs *ring, int bytes)
+static int wait_for_space(struct drm_i915_gem_request *req, int bytes)
{
- struct intel_ringbuffer *ringbuf = ring->buffer;
+ struct intel_ringbuffer *ringbuf = req->ringbuf;
+ struct intel_engine_cs *ring = req->ring;
+ struct drm_i915_gem_request *target;
+ unsigned space;
+ int ret;
+
+ if (intel_ring_space(ringbuf) >= bytes)
+ return 0;
+
+ /* The whole point of reserving space is to not wait! */
+ WARN_ON(ringbuf->reserved_in_use);
+
+ list_for_each_entry(target, &ring->request_list, list) {
+ /*
+ * The request queue is per-engine, so can contain requests
+ * from multiple ringbuffers. Here, we must ignore any that
+ * aren't from the ringbuffer we're considering.
+ */
+ if (target->ringbuf != ringbuf)
+ continue;
+
+ /* Would completion of this request free enough space? */
+ space = __intel_ring_space(target->postfix, ringbuf->tail,
+ ringbuf->size);
+ if (space >= bytes)
+ break;
+ }
+
+ if (WARN_ON(&target->list == &ring->request_list))
+ return -ENOSPC;
+
+ ret = i915_wait_request(target);
+ if (ret)
+ return ret;
+
+ ringbuf->space = space;
+ return 0;
+}
+
+static void ring_wrap(struct intel_ringbuffer *ringbuf)
+{
+ int rem = ringbuf->size - ringbuf->tail;
+ memset(ringbuf->virtual_start + ringbuf->tail, 0, rem);
+
+ ringbuf->tail = 0;
+ intel_ring_update_space(ringbuf);
+}
+
+static int ring_prepare(struct drm_i915_gem_request *req, int bytes)
+{
+ struct intel_ringbuffer *ringbuf = req->ringbuf;
int remain_usable = ringbuf->effective_size - ringbuf->tail;
int remain_actual = ringbuf->size - ringbuf->tail;
int ret, total_bytes, wait_bytes = 0;
@@ -2224,33 +2234,26 @@ static int __intel_ring_prepare(struct intel_engine_cs *ring, int bytes)
}

if (wait_bytes) {
- ret = ring_wait_for_space(ring, wait_bytes);
+ ret = wait_for_space(req, wait_bytes);
if (unlikely(ret))
return ret;

if (need_wrap)
- __wrap_ring_buffer(ringbuf);
+ ring_wrap(ringbuf);
}

return 0;
}

-int intel_ring_begin(struct drm_i915_gem_request *req,
- int num_dwords)
+int intel_ring_begin(struct drm_i915_gem_request *req, int num_dwords)
{
- struct intel_engine_cs *ring;
- struct drm_i915_private *dev_priv;
int ret;

- WARN_ON(req == NULL);
- ring = req->ring;
- dev_priv = req->i915;
-
- ret = __intel_ring_prepare(ring, num_dwords * sizeof(uint32_t));
+ ret = ring_prepare(req, num_dwords * sizeof(uint32_t));
if (ret)
return ret;

- ring->buffer->space -= num_dwords * sizeof(uint32_t);
+ req->ringbuf->space -= num_dwords * sizeof(uint32_t);
return 0;
}
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:17:18 UTC
Permalink
Both the ->dispatch_execbuffer and ->emit_bb_start callbacks do exactly
the same thing - add MI_BATCHBUFFER_START to the request's ringbuffer -
so we need only one vfunc.

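For reference, the single surviving vfunc and its dispatch flags, as
they stand in intel_ringbuffer.h after this patch:

        int (*emit_bb_start)(struct drm_i915_gem_request *req,
                             u64 offset, u32 length,
                             unsigned dispatch_flags);
#define I915_DISPATCH_SECURE 0x1
#define I915_DISPATCH_PINNED 0x2
#define I915_DISPATCH_RS 0x4

and a typical call from the execbuffer path:

        ret = params->ring->emit_bb_start(params->request,
                                          exec_start, exec_len,
                                          params->dispatch_flags);
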
Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_gem_execbuffer.c | 6 +--
drivers/gpu/drm/i915/i915_gem_render_state.c | 16 +++----
drivers/gpu/drm/i915/intel_lrc.c | 9 +++-
drivers/gpu/drm/i915/intel_ringbuffer.c | 67 +++++++++++++---------------
drivers/gpu/drm/i915/intel_ringbuffer.h | 12 +++--
5 files changed, 55 insertions(+), 55 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index 3956d74d8c8c..3e6384deca65 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -1297,9 +1297,9 @@ i915_gem_ringbuffer_submission(struct i915_execbuffer_params *params,
exec_start = params->batch_obj_vm_offset +
params->args_batch_start_offset;

- ret = params->ring->dispatch_execbuffer(params->request,
- exec_start, exec_len,
- params->dispatch_flags);
+ ret = params->ring->emit_bb_start(params->request,
+ exec_start, exec_len,
+ params->dispatch_flags);
if (ret)
return ret;

diff --git a/drivers/gpu/drm/i915/i915_gem_render_state.c b/drivers/gpu/drm/i915/i915_gem_render_state.c
index bee3f0ccd0cd..ccc988c2b226 100644
--- a/drivers/gpu/drm/i915/i915_gem_render_state.c
+++ b/drivers/gpu/drm/i915/i915_gem_render_state.c
@@ -205,18 +205,18 @@ int i915_gem_render_state_init(struct drm_i915_gem_request *req)
if (so.rodata == NULL)
return 0;

- ret = req->engine->dispatch_execbuffer(req, so.ggtt_offset,
- so.rodata->batch_items * 4,
- I915_DISPATCH_SECURE);
+ ret = req->engine->emit_bb_start(req, so.ggtt_offset,
+ so.rodata->batch_items * 4,
+ I915_DISPATCH_SECURE);
if (ret)
goto out;

if (so.aux_batch_size > 8) {
- ret = req->engine->dispatch_execbuffer(req,
- (so.ggtt_offset +
- so.aux_batch_offset),
- so.aux_batch_size,
- I915_DISPATCH_SECURE);
+ ret = req->engine->emit_bb_start(req,
+ (so.ggtt_offset +
+ so.aux_batch_offset),
+ so.aux_batch_size,
+ I915_DISPATCH_SECURE);
if (ret)
goto out;
}
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 82b21a883732..30effca91184 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -783,7 +783,9 @@ int intel_execlists_submission(struct i915_execbuffer_params *params,
exec_start = params->batch_obj_vm_offset +
args->batch_start_offset;

- ret = engine->emit_bb_start(params->request, exec_start, params->dispatch_flags);
+ ret = engine->emit_bb_start(params->request,
+ exec_start, args->batch_len,
+ params->dispatch_flags);
if (ret)
return ret;

@@ -1409,7 +1411,8 @@ static int intel_logical_ring_emit_pdps(struct drm_i915_gem_request *req)
}

static int gen8_emit_bb_start(struct drm_i915_gem_request *req,
- u64 offset, unsigned dispatch_flags)
+ u64 offset, u32 len,
+ unsigned dispatch_flags)
{
struct intel_ring *ring = req->ring;
bool ppgtt = !(dispatch_flags & I915_DISPATCH_SECURE);
@@ -1637,12 +1640,14 @@ static int intel_lr_context_render_state_init(struct drm_i915_gem_request *req)
return 0;

ret = req->engine->emit_bb_start(req, so.ggtt_offset,
+ so.rodata->batch_items * 4,
I915_DISPATCH_SECURE);
if (ret)
goto out;

ret = req->engine->emit_bb_start(req,
(so.ggtt_offset + so.aux_batch_offset),
+ so.aux_batch_size,
I915_DISPATCH_SECURE);
if (ret)
goto out;
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index e584b0f631f8..04f0a77d49cf 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -1656,9 +1656,9 @@ gen8_ring_disable_irq(struct intel_engine_cs *ring)
}

static int
-i965_dispatch_execbuffer(struct drm_i915_gem_request *req,
- u64 offset, u32 length,
- unsigned dispatch_flags)
+i965_emit_bb_start(struct drm_i915_gem_request *req,
+ u64 offset, u32 length,
+ unsigned dispatch_flags)
{
struct intel_ring *ring = req->ring;
int ret;
@@ -1683,9 +1683,9 @@ i965_dispatch_execbuffer(struct drm_i915_gem_request *req,
#define I830_TLB_ENTRIES (2)
#define I830_WA_SIZE max(I830_TLB_ENTRIES*4096, I830_BATCH_LIMIT)
static int
-i830_dispatch_execbuffer(struct drm_i915_gem_request *req,
- u64 offset, u32 len,
- unsigned dispatch_flags)
+i830_emit_bb_start(struct drm_i915_gem_request *req,
+ u64 offset, u32 len,
+ unsigned dispatch_flags)
{
struct intel_ring *ring = req->ring;
u32 cs_offset = req->engine->scratch.gtt_offset;
@@ -1746,9 +1746,9 @@ i830_dispatch_execbuffer(struct drm_i915_gem_request *req,
}

static int
-i915_dispatch_execbuffer(struct drm_i915_gem_request *req,
- u64 offset, u32 len,
- unsigned dispatch_flags)
+i915_emit_bb_start(struct drm_i915_gem_request *req,
+ u64 offset, u32 len,
+ unsigned dispatch_flags)
{
struct intel_ring *ring = req->ring;
int ret;
@@ -2361,9 +2361,9 @@ static int gen6_bsd_ring_flush(struct drm_i915_gem_request *req,
}

static int
-gen8_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
- u64 offset, u32 len,
- unsigned dispatch_flags)
+gen8_emit_bb_start(struct drm_i915_gem_request *req,
+ u64 offset, u32 len,
+ unsigned dispatch_flags)
{
struct intel_ring *ring = req->ring;
bool ppgtt = USES_PPGTT(req->i915) &&
@@ -2387,9 +2387,9 @@ gen8_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
}

static int
-hsw_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
- u64 offset, u32 len,
- unsigned dispatch_flags)
+hsw_emit_bb_start(struct drm_i915_gem_request *req,
+ u64 offset, u32 len,
+ unsigned dispatch_flags)
{
struct intel_ring *ring = req->ring;
int ret;
@@ -2412,9 +2412,9 @@ hsw_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
}

static int
-gen6_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
- u64 offset, u32 len,
- unsigned dispatch_flags)
+gen6_emit_bb_start(struct drm_i915_gem_request *req,
+ u64 offset, u32 len,
+ unsigned dispatch_flags)
{
struct intel_ring *ring = req->ring;
int ret;
@@ -2578,17 +2578,17 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
ring->write_tail = ring_write_tail;

if (IS_HASWELL(dev))
- ring->dispatch_execbuffer = hsw_ring_dispatch_execbuffer;
+ ring->emit_bb_start = hsw_emit_bb_start;
else if (IS_GEN8(dev))
- ring->dispatch_execbuffer = gen8_ring_dispatch_execbuffer;
+ ring->emit_bb_start = gen8_emit_bb_start;
else if (INTEL_INFO(dev)->gen >= 6)
- ring->dispatch_execbuffer = gen6_ring_dispatch_execbuffer;
+ ring->emit_bb_start = gen6_emit_bb_start;
else if (INTEL_INFO(dev)->gen >= 4)
- ring->dispatch_execbuffer = i965_dispatch_execbuffer;
+ ring->emit_bb_start = i965_emit_bb_start;
else if (IS_I830(dev) || IS_845G(dev))
- ring->dispatch_execbuffer = i830_dispatch_execbuffer;
+ ring->emit_bb_start = i830_emit_bb_start;
else
- ring->dispatch_execbuffer = i915_dispatch_execbuffer;
+ ring->emit_bb_start = i915_emit_bb_start;
ring->init_hw = init_render_ring;
ring->cleanup = render_ring_cleanup;

@@ -2646,8 +2646,7 @@ int intel_init_bsd_ring_buffer(struct drm_device *dev)
GT_RENDER_USER_INTERRUPT << GEN8_VCS1_IRQ_SHIFT;
ring->irq_enable = gen8_ring_enable_irq;
ring->irq_disable = gen8_ring_disable_irq;
- ring->dispatch_execbuffer =
- gen8_ring_dispatch_execbuffer;
+ ring->emit_bb_start = gen8_emit_bb_start;
if (i915.semaphores) {
ring->semaphore.sync_to = gen8_ring_sync;
ring->semaphore.signal = gen8_xcs_signal;
@@ -2657,8 +2656,7 @@ int intel_init_bsd_ring_buffer(struct drm_device *dev)
ring->irq_enable_mask = GT_BSD_USER_INTERRUPT;
ring->irq_enable = gen6_ring_enable_irq;
ring->irq_disable = gen6_ring_disable_irq;
- ring->dispatch_execbuffer =
- gen6_ring_dispatch_execbuffer;
+ ring->emit_bb_start = gen6_emit_bb_start;
if (i915.semaphores) {
ring->semaphore.sync_to = gen6_ring_sync;
ring->semaphore.signal = gen6_signal;
@@ -2687,7 +2685,7 @@ int intel_init_bsd_ring_buffer(struct drm_device *dev)
ring->irq_enable = i9xx_ring_enable_irq;
ring->irq_disable = i9xx_ring_disable_irq;
}
- ring->dispatch_execbuffer = i965_dispatch_execbuffer;
+ ring->emit_bb_start = i965_emit_bb_start;
}
ring->init_hw = init_ring_common;

@@ -2714,8 +2712,7 @@ int intel_init_bsd2_ring_buffer(struct drm_device *dev)
GT_RENDER_USER_INTERRUPT << GEN8_VCS2_IRQ_SHIFT;
ring->irq_enable = gen8_ring_enable_irq;
ring->irq_disable = gen8_ring_disable_irq;
- ring->dispatch_execbuffer =
- gen8_ring_dispatch_execbuffer;
+ ring->emit_bb_start = gen8_emit_bb_start;
if (i915.semaphores) {
ring->semaphore.sync_to = gen8_ring_sync;
ring->semaphore.signal = gen8_xcs_signal;
@@ -2744,7 +2741,7 @@ int intel_init_blt_ring_buffer(struct drm_device *dev)
GT_RENDER_USER_INTERRUPT << GEN8_BCS_IRQ_SHIFT;
ring->irq_enable = gen8_ring_enable_irq;
ring->irq_disable = gen8_ring_disable_irq;
- ring->dispatch_execbuffer = gen8_ring_dispatch_execbuffer;
+ ring->emit_bb_start = gen8_emit_bb_start;
if (i915.semaphores) {
ring->semaphore.sync_to = gen8_ring_sync;
ring->semaphore.signal = gen8_xcs_signal;
@@ -2754,7 +2751,7 @@ int intel_init_blt_ring_buffer(struct drm_device *dev)
ring->irq_enable_mask = GT_BLT_USER_INTERRUPT;
ring->irq_enable = gen6_ring_enable_irq;
ring->irq_disable = gen6_ring_disable_irq;
- ring->dispatch_execbuffer = gen6_ring_dispatch_execbuffer;
+ ring->emit_bb_start = gen6_emit_bb_start;
if (i915.semaphores) {
ring->semaphore.signal = gen6_signal;
ring->semaphore.sync_to = gen6_ring_sync;
@@ -2801,7 +2798,7 @@ int intel_init_vebox_ring_buffer(struct drm_device *dev)
GT_RENDER_USER_INTERRUPT << GEN8_VECS_IRQ_SHIFT;
ring->irq_enable = gen8_ring_enable_irq;
ring->irq_disable = gen8_ring_disable_irq;
- ring->dispatch_execbuffer = gen8_ring_dispatch_execbuffer;
+ ring->emit_bb_start = gen8_emit_bb_start;
if (i915.semaphores) {
ring->semaphore.sync_to = gen8_ring_sync;
ring->semaphore.signal = gen8_xcs_signal;
@@ -2811,7 +2808,7 @@ int intel_init_vebox_ring_buffer(struct drm_device *dev)
ring->irq_enable_mask = PM_VEBOX_USER_INTERRUPT;
ring->irq_enable = hsw_vebox_enable_irq;
ring->irq_disable = hsw_vebox_disable_irq;
- ring->dispatch_execbuffer = gen6_ring_dispatch_execbuffer;
+ ring->emit_bb_start = gen6_emit_bb_start;
if (i915.semaphores) {
ring->semaphore.sync_to = gen6_ring_sync;
ring->semaphore.signal = gen6_signal;
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index fdeadae726b8..3a10376b896f 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -223,12 +223,6 @@ struct intel_engine_cs {
* monotonic, even if not coherent.
*/
void (*irq_seqno_barrier)(struct intel_engine_cs *ring);
- int (*dispatch_execbuffer)(struct drm_i915_gem_request *req,
- u64 offset, u32 length,
- unsigned dispatch_flags);
-#define I915_DISPATCH_SECURE 0x1
-#define I915_DISPATCH_PINNED 0x2
-#define I915_DISPATCH_RS 0x4
void (*cleanup)(struct intel_engine_cs *ring);

/* GEN8 signal/wait table - never trust comments!
@@ -301,7 +295,11 @@ struct intel_engine_cs {
u32 invalidate_domains,
u32 flush_domains);
int (*emit_bb_start)(struct drm_i915_gem_request *req,
- u64 offset, unsigned dispatch_flags);
+ u64 offset, u32 length,
+ unsigned dispatch_flags);
+#define I915_DISPATCH_SECURE 0x1
+#define I915_DISPATCH_PINNED 0x2
+#define I915_DISPATCH_RS 0x4

/**
* List of objects currently involved in rendering from the
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:16:49 UTC
Permalink
Make sure that the RPS bottom-half is flushed before we set the idle
frequency when we decide the GPU is idle. This should prevent any races
between the bottom-half and setting the idle frequency, and ensures that
the bottom-half is bounded by the GPU's rpm reference taken whilst it
is active (i.e. between gen6_rps_busy() and gen6_rps_idle()).

v2: Avoid recursively using the i915->wq - RPS does not touch the
struct_mutex so has no place being on the ordered i915->wq.
v3: Enable/disable interrupts for RPS busy/idle in order to prevent
further HW access from RPS outside of the wakeref.

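A sketch of the resulting teardown ordering on the idle path (body
reconstructed from the hunks below; the IER/IMR masking details sit in
elided diff context, so treat those lines as illustrative):

void gen6_disable_rps_interrupts(struct drm_i915_private *dev_priv)
{
        if (!dev_priv->rps.interrupts_enabled)
                return;

        spin_lock_irq(&dev_priv->irq_lock);
        dev_priv->rps.interrupts_enabled = false;

        /* Mask and disable the RPS interrupt sources (elided context). */
        I915_WRITE(GEN6_PMINTRMSK, gen6_sanitize_rps_pm_mask(dev_priv, ~0));

        spin_unlock_irq(&dev_priv->irq_lock);
        synchronize_irq(dev_priv->dev->irq);    /* drain the hard irq */

        /* No more work will be generated, so the worker can be flushed
         * and its stale state discarded before we drop the frequency.
         */
        cancel_work_sync(&dev_priv->rps.work);  /* drain the bottom-half */
        gen6_reset_rps_interrupts(dev_priv);    /* discard any stale pm_iir */
}
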
Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
Cc: Imre Deak <***@intel.com>
Cc: Jesse Barnes <***@virtuousgeek.org>
---
drivers/gpu/drm/i915/i915_drv.c | 1 -
drivers/gpu/drm/i915/i915_irq.c | 45 +++++++++++++++---------------------
drivers/gpu/drm/i915/intel_display.c | 1 +
drivers/gpu/drm/i915/intel_drv.h | 6 ++---
drivers/gpu/drm/i915/intel_pm.c | 23 +++++++++---------
5 files changed, 34 insertions(+), 42 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index 4c090f1cf69c..442e1217e442 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -1492,7 +1492,6 @@ static int intel_runtime_suspend(struct device *device)

intel_guc_suspend(dev);

- intel_suspend_gt_powersave(dev);
intel_runtime_pm_disable_interrupts(dev_priv);

ret = intel_suspend_complete(dev_priv);
diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index 8866e981bcba..d9757d227c86 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -336,9 +336,8 @@ void gen6_disable_pm_irq(struct drm_i915_private *dev_priv, uint32_t mask)
__gen6_disable_pm_irq(dev_priv, mask);
}

-void gen6_reset_rps_interrupts(struct drm_device *dev)
+void gen6_reset_rps_interrupts(struct drm_i915_private *dev_priv)
{
- struct drm_i915_private *dev_priv = dev->dev_private;
i915_reg_t reg = gen6_pm_iir(dev_priv);

spin_lock_irq(&dev_priv->irq_lock);
@@ -349,14 +348,14 @@ void gen6_reset_rps_interrupts(struct drm_device *dev)
spin_unlock_irq(&dev_priv->irq_lock);
}

-void gen6_enable_rps_interrupts(struct drm_device *dev)
+void gen6_enable_rps_interrupts(struct drm_i915_private *dev_priv)
{
- struct drm_i915_private *dev_priv = dev->dev_private;
+ if (dev_priv->rps.interrupts_enabled)
+ return;

spin_lock_irq(&dev_priv->irq_lock);
-
- WARN_ON(dev_priv->rps.pm_iir);
- WARN_ON(I915_READ(gen6_pm_iir(dev_priv)) & dev_priv->pm_rps_events);
+ WARN_ON_ONCE(dev_priv->rps.pm_iir);
+ WARN_ON_ONCE(I915_READ(gen6_pm_iir(dev_priv)) & dev_priv->pm_rps_events);
dev_priv->rps.interrupts_enabled = true;
I915_WRITE(gen6_pm_ier(dev_priv), I915_READ(gen6_pm_ier(dev_priv)) |
dev_priv->pm_rps_events);
@@ -382,17 +381,13 @@ u32 gen6_sanitize_rps_pm_mask(struct drm_i915_private *dev_priv, u32 mask)
return mask;
}

-void gen6_disable_rps_interrupts(struct drm_device *dev)
+void gen6_disable_rps_interrupts(struct drm_i915_private *dev_priv)
{
- struct drm_i915_private *dev_priv = dev->dev_private;
+ if (!dev_priv->rps.interrupts_enabled)
+ return;

spin_lock_irq(&dev_priv->irq_lock);
dev_priv->rps.interrupts_enabled = false;
- spin_unlock_irq(&dev_priv->irq_lock);
-
- cancel_work_sync(&dev_priv->rps.work);
-
- spin_lock_irq(&dev_priv->irq_lock);

I915_WRITE(GEN6_PMINTRMSK, gen6_sanitize_rps_pm_mask(dev_priv, ~0));

@@ -401,8 +396,15 @@ void gen6_disable_rps_interrupts(struct drm_device *dev)
~dev_priv->pm_rps_events);

spin_unlock_irq(&dev_priv->irq_lock);
+ synchronize_irq(dev_priv->dev->irq);

- synchronize_irq(dev->irq);
+ /* Now that we will not be generating any more work, flush any
+ * outstanding tasks. As we are called on the RPS idle path,
+ * we will reset the GPU to minimum frequencies, so the current
+ * state of the worker can be discarded.
+ */
+ cancel_work_sync(&dev_priv->rps.work);
+ gen6_reset_rps_interrupts(dev_priv);
}

/**
@@ -1103,13 +1105,6 @@ static void gen6_pm_rps_work(struct work_struct *work)
return;
}

- /*
- * The RPS work is synced during runtime suspend, we don't require a
- * wakeref. TODO: instead of disabling the asserts make sure that we
- * always hold an RPM reference while the work is running.
- */
- DISABLE_RPM_WAKEREF_ASSERTS(dev_priv);
-
pm_iir = dev_priv->rps.pm_iir;
dev_priv->rps.pm_iir = 0;
/* Make sure not to corrupt PMIMR state used by ringbuffer on GEN6 */
@@ -1122,7 +1117,7 @@ static void gen6_pm_rps_work(struct work_struct *work)
WARN_ON(pm_iir & ~dev_priv->pm_rps_events);

if ((pm_iir & dev_priv->pm_rps_events) == 0 && !client_boost)
- goto out;
+ return;

mutex_lock(&dev_priv->rps.hw_lock);

@@ -1177,8 +1172,6 @@ static void gen6_pm_rps_work(struct work_struct *work)
intel_set_rps(dev_priv->dev, new_delay);

mutex_unlock(&dev_priv->rps.hw_lock);
-out:
- ENABLE_RPM_WAKEREF_ASSERTS(dev_priv);
}


@@ -1618,7 +1611,7 @@ static void gen6_rps_irq_handler(struct drm_i915_private *dev_priv, u32 pm_iir)
gen6_disable_pm_irq(dev_priv, pm_iir & dev_priv->pm_rps_events);
if (dev_priv->rps.interrupts_enabled) {
dev_priv->rps.pm_iir |= pm_iir & dev_priv->pm_rps_events;
- queue_work(dev_priv->wq, &dev_priv->rps.work);
+ schedule_work(&dev_priv->rps.work);
}
spin_unlock(&dev_priv->irq_lock);
}
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 8e646780c971..57c54c9bc82b 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -16069,6 +16069,7 @@ void intel_modeset_cleanup(struct drm_device *dev)
struct drm_i915_private *dev_priv = dev->dev_private;
struct intel_connector *connector;

+ intel_suspend_gt_powersave(dev);
intel_disable_gt_powersave(dev);

intel_backlight_unregister(dev);
diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
index bdfe4035e074..1e082ab4f4d8 100644
--- a/drivers/gpu/drm/i915/intel_drv.h
+++ b/drivers/gpu/drm/i915/intel_drv.h
@@ -998,9 +998,9 @@ void gen5_enable_gt_irq(struct drm_i915_private *dev_priv, uint32_t mask);
void gen5_disable_gt_irq(struct drm_i915_private *dev_priv, uint32_t mask);
void gen6_enable_pm_irq(struct drm_i915_private *dev_priv, uint32_t mask);
void gen6_disable_pm_irq(struct drm_i915_private *dev_priv, uint32_t mask);
-void gen6_reset_rps_interrupts(struct drm_device *dev);
-void gen6_enable_rps_interrupts(struct drm_device *dev);
-void gen6_disable_rps_interrupts(struct drm_device *dev);
+void gen6_reset_rps_interrupts(struct drm_i915_private *dev_priv);
+void gen6_enable_rps_interrupts(struct drm_i915_private *dev_priv);
+void gen6_disable_rps_interrupts(struct drm_i915_private *dev_priv);
u32 gen6_sanitize_rps_pm_mask(struct drm_i915_private *dev_priv, u32 mask);
void intel_runtime_pm_disable_interrupts(struct drm_i915_private *dev_priv);
void intel_runtime_pm_enable_interrupts(struct drm_i915_private *dev_priv);
diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index 401c3770057d..e51ba529a97e 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -4475,17 +4475,24 @@ void gen6_rps_busy(struct drm_i915_private *dev_priv)
gen6_rps_reset_ei(dev_priv);
I915_WRITE(GEN6_PMINTRMSK,
gen6_rps_pm_mask(dev_priv, dev_priv->rps.cur_freq));
+
+ gen6_enable_rps_interrupts(dev_priv);
}
mutex_unlock(&dev_priv->rps.hw_lock);
}

void gen6_rps_idle(struct drm_i915_private *dev_priv)
{
- struct drm_device *dev = dev_priv->dev;
+ /* Flush our bottom-half so that it does not race with us
+ * setting the idle frequency and so that it is bounded by
+ * our rpm wakeref. And then disable the interrupts to stop any
+ * further RPS reclocking whilst we are asleep.
+ */
+ gen6_disable_rps_interrupts(dev_priv);

mutex_lock(&dev_priv->rps.hw_lock);
if (dev_priv->rps.enabled) {
- if (IS_VALLEYVIEW(dev) || IS_CHERRYVIEW(dev))
+ if (IS_VALLEYVIEW(dev_priv) || IS_CHERRYVIEW(dev_priv))
vlv_set_rps_idle(dev_priv);
else
gen6_set_rps(dev_priv->dev, dev_priv->rps.idle_freq);
@@ -4523,7 +4530,7 @@ void gen6_rps_boost(struct drm_i915_private *dev_priv,
spin_lock_irq(&dev_priv->irq_lock);
if (dev_priv->rps.interrupts_enabled) {
dev_priv->rps.client_boost = true;
- queue_work(dev_priv->wq, &dev_priv->rps.work);
+ schedule_work(&dev_priv->rps.work);
}
spin_unlock_irq(&dev_priv->irq_lock);

@@ -6129,8 +6136,6 @@ static void gen6_suspend_rps(struct drm_device *dev)
struct drm_i915_private *dev_priv = dev->dev_private;

flush_delayed_work(&dev_priv->rps.delayed_resume_work);
-
- gen6_disable_rps_interrupts(dev);
}

/**
@@ -6161,8 +6166,6 @@ void intel_disable_gt_powersave(struct drm_device *dev)
if (IS_IRONLAKE_M(dev)) {
ironlake_disable_drps(dev);
} else if (INTEL_INFO(dev)->gen >= 6) {
- intel_suspend_gt_powersave(dev);
-
mutex_lock(&dev_priv->rps.hw_lock);
if (INTEL_INFO(dev)->gen >= 9)
gen9_disable_rps(dev);
@@ -6186,8 +6189,7 @@ static void intel_gen6_powersave_work(struct work_struct *work)
struct drm_device *dev = dev_priv->dev;

mutex_lock(&dev_priv->rps.hw_lock);
-
- gen6_reset_rps_interrupts(dev);
+ gen6_reset_rps_interrupts(dev_priv);

if (IS_CHERRYVIEW(dev)) {
cherryview_enable_rps(dev);
@@ -6213,9 +6215,6 @@ static void intel_gen6_powersave_work(struct work_struct *work)
WARN_ON(dev_priv->rps.efficient_freq > dev_priv->rps.max_freq);

dev_priv->rps.enabled = true;
-
- gen6_enable_rps_interrupts(dev);
-
mutex_unlock(&dev_priv->rps.hw_lock);

intel_runtime_pm_put(dev_priv);
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:17:24 UTC
Permalink
In the next patch, request tracking is made more generic and for that
we need a new expanded struct. To separate the logic changes from the
mechanical churn, we split out the structure renaming into this patch.

v2: Writer's block. Add some spiel about why we track requests.
v3: Now i915_gem_active.

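In miniature, the rename replaces the bare request pointers with an
embedded tracker (struct and helper taken verbatim from the hunk against
i915_gem_request.h below), so obj->last_write_req becomes
obj->last_write.request:

struct i915_gem_active {
        struct drm_i915_gem_request *request;
};

static inline void
i915_gem_request_mark_active(struct drm_i915_gem_request *request,
                             struct i915_gem_active *active)
{
        i915_gem_request_assign(&active->request, request);
}

Marking an object active then reads, e.g. in i915_vma_move_to_active():

        i915_gem_request_mark_active(req, &obj->last_read[engine->id]);
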
Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_debugfs.c | 10 +++---
drivers/gpu/drm/i915/i915_drv.h | 9 +++--
drivers/gpu/drm/i915/i915_gem.c | 56 +++++++++++++++---------------
drivers/gpu/drm/i915/i915_gem_execbuffer.c | 4 +--
drivers/gpu/drm/i915/i915_gem_fence.c | 6 ++--
drivers/gpu/drm/i915/i915_gem_request.h | 38 ++++++++++++++++++++
drivers/gpu/drm/i915/i915_gem_tiling.c | 2 +-
drivers/gpu/drm/i915/i915_gpu_error.c | 6 ++--
drivers/gpu/drm/i915/intel_display.c | 10 +++---
9 files changed, 89 insertions(+), 52 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 8de944ed3369..65cb1d6a5d64 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -146,10 +146,10 @@ describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj)
obj->base.write_domain);
for_each_ring(ring, dev_priv, i)
seq_printf(m, "%x ",
- i915_gem_request_get_seqno(obj->last_read_req[i]));
+ i915_gem_request_get_seqno(obj->last_read[i].request));
seq_printf(m, "] %x %x%s%s%s",
- i915_gem_request_get_seqno(obj->last_write_req),
- i915_gem_request_get_seqno(obj->last_fenced_req),
+ i915_gem_request_get_seqno(obj->last_write.request),
+ i915_gem_request_get_seqno(obj->last_fence.request),
i915_cache_level_str(to_i915(obj->base.dev), obj->cache_level),
obj->dirty ? " dirty" : "",
obj->madv == I915_MADV_DONTNEED ? " purgeable" : "");
@@ -184,8 +184,8 @@ describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj)
*t = '\0';
seq_printf(m, " (%s mappable)", s);
}
- if (obj->last_write_req != NULL)
- seq_printf(m, " (%s)", obj->last_write_req->engine->name);
+ if (obj->last_write.request != NULL)
+ seq_printf(m, " (%s)", obj->last_write.request->engine->name);
if (obj->frontbuffer_bits)
seq_printf(m, " (frontbuffer: 0x%03x)", obj->frontbuffer_bits);
}
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index cae448e238ca..c577f86d94f8 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2110,11 +2110,10 @@ struct drm_i915_gem_object {
* requests on one ring where the write request is older than the
* read request. This allows for the CPU to read from an active
* buffer by only waiting for the write to complete.
- * */
- struct drm_i915_gem_request *last_read_req[I915_NUM_RINGS];
- struct drm_i915_gem_request *last_write_req;
- /** Breadcrumb of last fenced GPU access to the buffer. */
- struct drm_i915_gem_request *last_fenced_req;
+ */
+ struct i915_gem_active last_read[I915_NUM_RINGS];
+ struct i915_gem_active last_write;
+ struct i915_gem_active last_fence;

/** Current tiling stride for the object, if it's tiled. */
uint32_t stride;
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index b0230e7151ce..77c253ddf060 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1117,23 +1117,23 @@ i915_gem_object_wait_rendering(struct drm_i915_gem_object *obj,
return 0;

if (readonly) {
- if (obj->last_write_req != NULL) {
- ret = i915_wait_request(obj->last_write_req);
+ if (obj->last_write.request != NULL) {
+ ret = i915_wait_request(obj->last_write.request);
if (ret)
return ret;

- i = obj->last_write_req->engine->id;
- if (obj->last_read_req[i] == obj->last_write_req)
+ i = obj->last_write.request->engine->id;
+ if (obj->last_read[i].request == obj->last_write.request)
i915_gem_object_retire__read(obj, i);
else
i915_gem_object_retire__write(obj);
}
} else {
for (i = 0; i < I915_NUM_RINGS; i++) {
- if (obj->last_read_req[i] == NULL)
+ if (obj->last_read[i].request == NULL)
continue;

- ret = i915_wait_request(obj->last_read_req[i]);
+ ret = i915_wait_request(obj->last_read[i].request);
if (ret)
return ret;

@@ -1151,9 +1151,9 @@ i915_gem_object_retire_request(struct drm_i915_gem_object *obj,
{
int ring = req->engine->id;

- if (obj->last_read_req[ring] == req)
+ if (obj->last_read[ring].request == req)
i915_gem_object_retire__read(obj, ring);
- else if (obj->last_write_req == req)
+ else if (obj->last_write.request == req)
i915_gem_object_retire__write(obj);

i915_gem_request_retire_upto(req);
@@ -1181,7 +1181,7 @@ i915_gem_object_wait_rendering__nonblocking(struct drm_i915_gem_object *obj,
if (readonly) {
struct drm_i915_gem_request *req;

- req = obj->last_write_req;
+ req = obj->last_write.request;
if (req == NULL)
return 0;

@@ -1190,7 +1190,7 @@ i915_gem_object_wait_rendering__nonblocking(struct drm_i915_gem_object *obj,
for (i = 0; i < I915_NUM_RINGS; i++) {
struct drm_i915_gem_request *req;

- req = obj->last_read_req[i];
+ req = obj->last_read[i].request;
if (req == NULL)
continue;

@@ -2070,7 +2070,7 @@ void i915_vma_move_to_active(struct i915_vma *vma,
obj->active |= intel_engine_flag(engine);

list_move_tail(&obj->ring_list[engine->id], &engine->active_list);
- i915_gem_request_assign(&obj->last_read_req[engine->id], req);
+ i915_gem_request_mark_active(req, &obj->last_read[engine->id]);

list_move_tail(&vma->mm_list, &vma->vm->active_list);
}
@@ -2078,10 +2078,10 @@ void i915_vma_move_to_active(struct i915_vma *vma,
static void
i915_gem_object_retire__write(struct drm_i915_gem_object *obj)
{
- GEM_BUG_ON(obj->last_write_req == NULL);
- GEM_BUG_ON(!(obj->active & intel_engine_flag(obj->last_write_req->engine)));
+ GEM_BUG_ON(obj->last_write.request == NULL);
+ GEM_BUG_ON(!(obj->active & intel_engine_flag(obj->last_write.request->engine)));

- i915_gem_request_assign(&obj->last_write_req, NULL);
+ i915_gem_request_assign(&obj->last_write.request, NULL);
intel_fb_obj_flush(obj, true, ORIGIN_CS);
}

@@ -2090,13 +2090,13 @@ i915_gem_object_retire__read(struct drm_i915_gem_object *obj, int ring)
{
struct i915_vma *vma;

- GEM_BUG_ON(obj->last_read_req[ring] == NULL);
+ GEM_BUG_ON(obj->last_read[ring].request == NULL);
GEM_BUG_ON(!(obj->active & (1 << ring)));

list_del_init(&obj->ring_list[ring]);
- i915_gem_request_assign(&obj->last_read_req[ring], NULL);
+ i915_gem_request_assign(&obj->last_read[ring].request, NULL);

- if (obj->last_write_req && obj->last_write_req->engine->id == ring)
+ if (obj->last_write.request && obj->last_write.request->engine->id == ring)
i915_gem_object_retire__write(obj);

obj->active &= ~(1 << ring);
@@ -2115,7 +2115,7 @@ i915_gem_object_retire__read(struct drm_i915_gem_object *obj, int ring)
list_move_tail(&vma->mm_list, &vma->vm->inactive_list);
}

- i915_gem_request_assign(&obj->last_fenced_req, NULL);
+ i915_gem_request_assign(&obj->last_fence.request, NULL);
drm_gem_object_unreference(&obj->base);
}

@@ -2336,7 +2336,7 @@ i915_gem_retire_requests_ring(struct intel_engine_cs *ring)
struct drm_i915_gem_object,
ring_list[ring->id]);

- if (!list_empty(&obj->last_read_req[ring->id]->list))
+ if (!list_empty(&obj->last_read[ring->id].request->list))
break;

i915_gem_object_retire__read(obj, ring->id);
@@ -2445,7 +2445,7 @@ i915_gem_object_flush_active(struct drm_i915_gem_object *obj)
for (i = 0; i < I915_NUM_RINGS; i++) {
struct drm_i915_gem_request *req;

- req = obj->last_read_req[i];
+ req = obj->last_read[i].request;
if (req == NULL)
continue;

@@ -2525,10 +2525,10 @@ i915_gem_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
drm_gem_object_unreference(&obj->base);

for (i = 0; i < I915_NUM_RINGS; i++) {
- if (obj->last_read_req[i] == NULL)
+ if (obj->last_read[i].request == NULL)
continue;

- req[n++] = i915_gem_request_get(obj->last_read_req[i]);
+ req[n++] = i915_gem_request_get(obj->last_read[i].request);
}

mutex_unlock(&dev->struct_mutex);
@@ -2619,12 +2619,12 @@ i915_gem_object_sync(struct drm_i915_gem_object *obj,

n = 0;
if (readonly) {
- if (obj->last_write_req)
- req[n++] = obj->last_write_req;
+ if (obj->last_write.request)
+ req[n++] = obj->last_write.request;
} else {
for (i = 0; i < I915_NUM_RINGS; i++)
- if (obj->last_read_req[i])
- req[n++] = obj->last_read_req[i];
+ if (obj->last_read[i].request)
+ req[n++] = obj->last_read[i].request;
}
for (i = 0; i < n; i++) {
ret = __i915_gem_object_sync(obj, to, req[i]);
@@ -3695,8 +3695,8 @@ i915_gem_busy_ioctl(struct drm_device *dev, void *data,

BUILD_BUG_ON(I915_NUM_RINGS > 16);
args->busy = obj->active << 16;
- if (obj->last_write_req)
- args->busy |= obj->last_write_req->engine->id;
+ if (obj->last_write.request)
+ args->busy |= obj->last_write.request->engine->id;

unref:
drm_gem_object_unreference(&obj->base);
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index 6dee27224ddb..56d6b5dbb121 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -1125,7 +1125,7 @@ i915_gem_execbuffer_move_to_active(struct list_head *vmas,

i915_vma_move_to_active(vma, req);
if (obj->base.write_domain) {
- i915_gem_request_assign(&obj->last_write_req, req);
+ i915_gem_request_mark_active(req, &obj->last_write);

intel_fb_obj_invalidate(obj, ORIGIN_CS);

@@ -1133,7 +1133,7 @@ i915_gem_execbuffer_move_to_active(struct list_head *vmas,
obj->base.write_domain &= ~I915_GEM_GPU_DOMAINS;
}
if (entry->flags & EXEC_OBJECT_NEEDS_FENCE) {
- i915_gem_request_assign(&obj->last_fenced_req, req);
+ i915_gem_request_mark_active(req, &obj->last_fence);
if (entry->flags & __EXEC_OBJECT_HAS_FENCE) {
struct drm_i915_private *dev_priv = req->i915;
list_move_tail(&dev_priv->fence_regs[obj->fence_reg].lru_list,
diff --git a/drivers/gpu/drm/i915/i915_gem_fence.c b/drivers/gpu/drm/i915/i915_gem_fence.c
index 598198543dcd..ab29c237ffa9 100644
--- a/drivers/gpu/drm/i915/i915_gem_fence.c
+++ b/drivers/gpu/drm/i915/i915_gem_fence.c
@@ -261,12 +261,12 @@ static inline void i915_gem_object_fence_lost(struct drm_i915_gem_object *obj)
static int
i915_gem_object_wait_fence(struct drm_i915_gem_object *obj)
{
- if (obj->last_fenced_req) {
- int ret = i915_wait_request(obj->last_fenced_req);
+ if (obj->last_fence.request) {
+ int ret = i915_wait_request(obj->last_fence.request);
if (ret)
return ret;

- i915_gem_request_assign(&obj->last_fenced_req, NULL);
+ i915_gem_request_assign(&obj->last_fence.request, NULL);
}

return 0;
diff --git a/drivers/gpu/drm/i915/i915_gem_request.h b/drivers/gpu/drm/i915/i915_gem_request.h
index 2da9e0b5dfc7..0a21986c332b 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.h
+++ b/drivers/gpu/drm/i915/i915_gem_request.h
@@ -208,4 +208,42 @@ static inline bool i915_gem_request_completed(struct drm_i915_gem_request *req)
req->fence.seqno);
}

+/* We treat requests as fences. This is not to be confused with our
+ * "fence registers" but rather pipeline synchronisation objects ala GL_ARB_sync.
+ * We use the fences to synchronize access from the CPU with activity on the
+ * GPU, for example, we should not rewrite an object's PTE whilst the GPU
+ * is reading them. We also track fences at a higher level to provide
+ * implicit synchronisation around GEM objects, e.g. set-domain will wait
+ * for outstanding GPU rendering before marking the object ready for CPU
+ * access, or a pageflip will wait until the GPU is complete before showing
+ * the frame on the scanout.
+ *
+ * In order to use a fence, the object must track the fence it needs to
+ * serialise with. For example, GEM objects want to track both read and
+ * write access so that we can perform concurrent read operations between
+ * the CPU and GPU engines, as well as waiting for all rendering to
+ * complete, or waiting for the last GPU user of a "fence register". The
+ * object then embeds a @i915_gem_active to track the most recent (in
+ * retirement order) request relevant for the desired mode of access.
+ * The @i915_gem_active is updated with i915_gem_request_mark_active() to
+ * track the most recent fence request, typically this is done as part of
+ * i915_vma_move_to_active().
+ *
+ * When the @i915_gem_active completes (is retired), it will
+ * signal its completion to the owner through a callback as well as mark
+ * itself as idle (i915_gem_active.request == NULL). The owner
+ * can then perform any action, such as delayed freeing of an active
+ * resource including itself.
+ */
+struct i915_gem_active {
+ struct drm_i915_gem_request *request;
+};
+
+static inline void
+i915_gem_request_mark_active(struct drm_i915_gem_request *request,
+ struct i915_gem_active *active)
+{
+ i915_gem_request_assign(&active->request, request);
+}
+
#endif /* I915_GEM_REQUEST_H */
diff --git a/drivers/gpu/drm/i915/i915_gem_tiling.c b/drivers/gpu/drm/i915/i915_gem_tiling.c
index 7410f6c962e7..c7588135a82d 100644
--- a/drivers/gpu/drm/i915/i915_gem_tiling.c
+++ b/drivers/gpu/drm/i915/i915_gem_tiling.c
@@ -242,7 +242,7 @@ i915_gem_set_tiling(struct drm_device *dev, void *data,
}

obj->fence_dirty =
- obj->last_fenced_req ||
+ obj->last_fence.request ||
obj->fence_reg != I915_FENCE_REG_NONE;

obj->tiling_mode = args->tiling_mode;
diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index 2785f2d1f073..5027636e3624 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -708,8 +708,8 @@ static void capture_bo(struct drm_i915_error_buffer *err,
err->size = obj->base.size;
err->name = obj->base.name;
for (i = 0; i < I915_NUM_RINGS; i++)
- err->rseqno[i] = i915_gem_request_get_seqno(obj->last_read_req[i]);
- err->wseqno = i915_gem_request_get_seqno(obj->last_write_req);
+ err->rseqno[i] = i915_gem_request_get_seqno(obj->last_read[i].request);
+ err->wseqno = i915_gem_request_get_seqno(obj->last_write.request);
err->gtt_offset = vma->node.start;
err->read_domains = obj->base.read_domains;
err->write_domain = obj->base.write_domain;
@@ -721,7 +721,7 @@ static void capture_bo(struct drm_i915_error_buffer *err,
err->dirty = obj->dirty;
err->purgeable = obj->madv != I915_MADV_WILLNEED;
err->userptr = obj->userptr.mm != NULL;
- err->ring = obj->last_write_req ? obj->last_write_req->engine->id : -1;
+ err->ring = obj->last_write.request ? obj->last_write.request->engine->id : -1;
err->cache_level = obj->cache_level;
}

diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index ec52fff7e0b0..eef858d5376f 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -11310,7 +11310,7 @@ static bool use_mmio_flip(struct intel_engine_cs *ring,
false))
return true;
else
- return ring != i915_gem_request_get_engine(obj->last_write_req);
+ return ring != i915_gem_request_get_engine(obj->last_write.request);
}

static void skl_do_mmio_flip(struct intel_crtc *intel_crtc,
@@ -11455,7 +11455,7 @@ static int intel_queue_mmio_flip(struct drm_device *dev,
return -ENOMEM;

mmio_flip->i915 = to_i915(dev);
- mmio_flip->req = i915_gem_request_get(obj->last_write_req);
+ mmio_flip->req = i915_gem_request_get(obj->last_write.request);
mmio_flip->crtc = to_intel_crtc(crtc);
mmio_flip->rotation = crtc->primary->state->rotation;

@@ -11654,7 +11654,7 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
} else if (IS_IVYBRIDGE(dev) || IS_HASWELL(dev)) {
ring = &dev_priv->ring[BCS];
} else if (INTEL_INFO(dev)->gen >= 7) {
- ring = i915_gem_request_get_engine(obj->last_write_req);
+ ring = i915_gem_request_get_engine(obj->last_write.request);
if (ring == NULL || ring->id != RCS)
ring = &dev_priv->ring[BCS];
} else {
@@ -11695,7 +11695,7 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
goto cleanup_unpin;

i915_gem_request_assign(&work->flip_queued_req,
- obj->last_write_req);
+ obj->last_write.request);
} else {
ret = dev_priv->display.queue_flip(dev, crtc, fb, obj, request,
page_flip_flags);
@@ -13895,7 +13895,7 @@ intel_prepare_plane_fb(struct drm_plane *plane,
to_intel_plane_state(new_state);

i915_gem_request_assign(&plane_state->wait_req,
- obj->last_write_req);
+ obj->last_write.request);
}

i915_gem_track_fb(old_obj, obj, intel_plane->frontbuffer_bit);
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:17:19 UTC
Permalink
Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_gem_request.c | 8 +-
drivers/gpu/drm/i915/intel_lrc.c | 14 ++--
drivers/gpu/drm/i915/intel_ringbuffer.c | 129 +++++++++++++++++---------------
drivers/gpu/drm/i915/intel_ringbuffer.h | 21 +++---
4 files changed, 87 insertions(+), 85 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
index ce663acc9c7d..01443d8d9224 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/i915_gem_request.c
@@ -434,13 +434,7 @@ void __i915_add_request(struct drm_i915_gem_request *request,
*/
request->postfix = intel_ring_get_tail(ring);

- if (i915.enable_execlists)
- ret = request->engine->emit_request(request);
- else {
- ret = request->engine->add_request(request);
-
- request->tail = intel_ring_get_tail(ring);
- }
+ ret = request->engine->add_request(request);
/* Not allowed to fail! */
WARN(ret, "emit|add_request failed: %d!\n", ret);

diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 30effca91184..9838503fafca 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -445,7 +445,7 @@ static void execlists_context_unqueue(struct intel_engine_cs *engine)
if (req0->elsp_submitted) {
/*
* Apply the wa NOOPS to prevent ring:HEAD == req:TAIL
- * as we resubmit the request. See gen8_emit_request()
+ * as we resubmit the request. See gen8_add_request()
* for where we prepare the padding after the end of the
* request.
*/
@@ -1588,7 +1588,7 @@ gen6_seqno_barrier(struct intel_engine_cs *ring)
intel_flush_status_page(ring, I915_GEM_HWS_INDEX);
}

-static int gen8_emit_request(struct drm_i915_gem_request *request)
+static int gen8_add_request(struct drm_i915_gem_request *request)
{
struct intel_ring *ring = request->ring;
u32 cmd;
@@ -1782,8 +1782,8 @@ static int logical_render_ring_init(struct drm_device *dev)
ring->init_context = gen8_init_rcs_context;
ring->cleanup = intel_fini_pipe_control;
ring->irq_seqno_barrier = gen6_seqno_barrier;
- ring->emit_request = gen8_emit_request;
ring->emit_flush = gen8_emit_flush_render;
+ ring->add_request = gen8_add_request;
ring->irq_enable = gen8_logical_ring_enable_irq;
ring->irq_disable = gen8_logical_ring_disable_irq;
ring->emit_bb_start = gen8_emit_bb_start;
@@ -1828,8 +1828,8 @@ static int logical_bsd_ring_init(struct drm_device *dev)

ring->init_hw = gen8_init_common_ring;
ring->irq_seqno_barrier = gen6_seqno_barrier;
- ring->emit_request = gen8_emit_request;
ring->emit_flush = gen8_emit_flush;
+ ring->add_request = gen8_add_request;
ring->irq_enable = gen8_logical_ring_enable_irq;
ring->irq_disable = gen8_logical_ring_disable_irq;
ring->emit_bb_start = gen8_emit_bb_start;
@@ -1852,8 +1852,8 @@ static int logical_bsd2_ring_init(struct drm_device *dev)

ring->init_hw = gen8_init_common_ring;
ring->irq_seqno_barrier = gen6_seqno_barrier;
- ring->emit_request = gen8_emit_request;
ring->emit_flush = gen8_emit_flush;
+ ring->add_request = gen8_add_request;
ring->irq_enable = gen8_logical_ring_enable_irq;
ring->irq_disable = gen8_logical_ring_disable_irq;
ring->emit_bb_start = gen8_emit_bb_start;
@@ -1876,8 +1876,8 @@ static int logical_blt_ring_init(struct drm_device *dev)

ring->init_hw = gen8_init_common_ring;
ring->irq_seqno_barrier = gen6_seqno_barrier;
- ring->emit_request = gen8_emit_request;
ring->emit_flush = gen8_emit_flush;
+ ring->add_request = gen8_add_request;
ring->irq_enable = gen8_logical_ring_enable_irq;
ring->irq_disable = gen8_logical_ring_disable_irq;
ring->emit_bb_start = gen8_emit_bb_start;
@@ -1900,8 +1900,8 @@ static int logical_vebox_ring_init(struct drm_device *dev)

ring->init_hw = gen8_init_common_ring;
ring->irq_seqno_barrier = gen6_seqno_barrier;
- ring->emit_request = gen8_emit_request;
ring->emit_flush = gen8_emit_flush;
+ ring->add_request = gen8_add_request;
ring->irq_enable = gen8_logical_ring_enable_irq;
ring->irq_disable = gen8_logical_ring_disable_irq;
ring->emit_bb_start = gen8_emit_bb_start;
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index 04f0a77d49cf..556e9e2c1fec 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -59,13 +59,6 @@ int intel_ring_space(struct intel_ring *ringbuf)
return ringbuf->space;
}

-static void __intel_ring_advance(struct intel_engine_cs *ring)
-{
- struct intel_ring *ringbuf = ring->buffer;
- ringbuf->tail &= ringbuf->size - 1;
- ring->write_tail(ring, ringbuf->tail);
-}
-
static int
gen2_render_ring_flush(struct drm_i915_gem_request *req,
u32 invalidate_domains,
@@ -418,13 +411,6 @@ gen8_render_ring_flush(struct drm_i915_gem_request *req,
return gen8_emit_pipe_control(req, flags, scratch_addr);
}

-static void ring_write_tail(struct intel_engine_cs *ring,
- u32 value)
-{
- struct drm_i915_private *dev_priv = ring->dev->dev_private;
- I915_WRITE_TAIL(ring, value);
-}
-
u64 intel_engine_get_active_head(struct intel_engine_cs *engine)
{
struct drm_i915_private *dev_priv = engine->i915;
@@ -533,7 +519,7 @@ static bool stop_ring(struct intel_engine_cs *ring)

I915_WRITE_CTL(ring, 0);
I915_WRITE_HEAD(ring, 0);
- ring->write_tail(ring, 0);
+ I915_WRITE_TAIL(ring, 0);

if (!IS_GEN2(ring->dev)) {
(void)I915_READ_CTL(ring);
@@ -1308,6 +1294,7 @@ static int gen6_signal(struct drm_i915_gem_request *signaller_req,
static int
gen6_add_request(struct drm_i915_gem_request *req)
{
+ struct drm_i915_private *dev_priv = req->i915;
struct intel_ring *ring = req->ring;
int ret;

@@ -1323,7 +1310,61 @@ gen6_add_request(struct drm_i915_gem_request *req)
intel_ring_emit(ring, I915_GEM_HWS_INDEX << MI_STORE_DWORD_INDEX_SHIFT);
intel_ring_emit(ring, req->fence.seqno);
intel_ring_emit(ring, MI_USER_INTERRUPT);
- __intel_ring_advance(req->engine);
+ intel_ring_advance(ring);
+
+ req->tail = intel_ring_get_tail(ring);
+ I915_WRITE_TAIL(req->engine, req->tail);
+
+ return 0;
+}
+
+static int
+gen6_bsd_add_request(struct drm_i915_gem_request *req)
+{
+ struct drm_i915_private *dev_priv = req->i915;
+ struct intel_ring *ring = req->ring;
+ int ret;
+
+ if (req->engine->semaphore.signal)
+ ret = req->engine->semaphore.signal(req, 4);
+ else
+ ret = intel_ring_begin(req, 4);
+ if (ret)
+ return ret;
+
+ intel_ring_emit(ring, MI_STORE_DWORD_INDEX);
+ intel_ring_emit(ring, I915_GEM_HWS_INDEX << MI_STORE_DWORD_INDEX_SHIFT);
+ intel_ring_emit(ring, req->fence.seqno);
+ intel_ring_emit(ring, MI_USER_INTERRUPT);
+ intel_ring_advance(ring);
+
+ /* Every tail move must follow the sequence below */
+
+ /* Disable notification that the ring is IDLE. The GT
+ * will then assume that it is busy and bring it out of rc6.
+ */
+ I915_WRITE(GEN6_BSD_SLEEP_PSMI_CONTROL,
+ _MASKED_BIT_ENABLE(GEN6_BSD_SLEEP_MSG_DISABLE));
+
+ /* Clear the context id. Here be magic! */
+ I915_WRITE64(GEN6_BSD_RNCID, 0x0);
+
+ /* Wait for the ring not to be idle, i.e. for it to wake up. */
+ if (wait_for((I915_READ(GEN6_BSD_SLEEP_PSMI_CONTROL) &
+ GEN6_BSD_SLEEP_INDICATOR) == 0,
+ 50))
+ DRM_ERROR("timed out waiting for the BSD ring to wake up\n");
+
+ /* Now that the ring is fully powered up, update the tail */
+ req->tail = intel_ring_get_tail(ring);
+ I915_WRITE_TAIL(req->engine, req->tail);
+ POSTING_READ(RING_TAIL(req->engine->mmio_base));
+
+ /* Let the ring send IDLE messages to the GT again,
+ * and so let it sleep to conserve power when idle.
+ */
+ I915_WRITE(GEN6_BSD_SLEEP_PSMI_CONTROL,
+ _MASKED_BIT_DISABLE(GEN6_BSD_SLEEP_MSG_DISABLE));

return 0;
}
@@ -1423,6 +1464,7 @@ do { \
static int
pc_render_add_request(struct drm_i915_gem_request *req)
{
+ struct drm_i915_private *dev_priv = req->i915;
struct intel_ring *ring = req->ring;
u32 addr = req->engine->status_page.gfx_addr +
(I915_GEM_HWS_INDEX << MI_STORE_DWORD_INDEX_SHIFT);
@@ -1467,7 +1509,10 @@ pc_render_add_request(struct drm_i915_gem_request *req)
intel_ring_emit(ring, addr | PIPE_CONTROL_GLOBAL_GTT);
intel_ring_emit(ring, req->fence.seqno);
intel_ring_emit(ring, 0);
- __intel_ring_advance(req->engine);
+ intel_ring_advance(ring);
+
+ req->tail = intel_ring_get_tail(ring);
+ I915_WRITE_TAIL(req->engine, req->tail);

return 0;
}
@@ -1566,6 +1611,7 @@ bsd_ring_flush(struct drm_i915_gem_request *req,
static int
i9xx_add_request(struct drm_i915_gem_request *req)
{
+ struct drm_i915_private *dev_priv = req->i915;
struct intel_ring *ring = req->ring;
int ret;

@@ -1577,7 +1623,10 @@ i9xx_add_request(struct drm_i915_gem_request *req)
intel_ring_emit(ring, I915_GEM_HWS_INDEX << MI_STORE_DWORD_INDEX_SHIFT);
intel_ring_emit(ring, req->fence.seqno);
intel_ring_emit(ring, MI_USER_INTERRUPT);
- __intel_ring_advance(req->engine);
+ intel_ring_advance(ring);
+
+ req->tail = intel_ring_get_tail(ring);
+ I915_WRITE_TAIL(req->engine, req->tail);

return 0;
}
@@ -2283,39 +2332,6 @@ void intel_engine_init_seqno(struct intel_engine_cs *ring, u32 seqno)
ring->hangcheck.seqno = seqno;
}

-static void gen6_bsd_ring_write_tail(struct intel_engine_cs *ring,
- u32 value)
-{
- struct drm_i915_private *dev_priv = ring->dev->dev_private;
-
- /* Every tail move must follow the sequence below */
-
- /* Disable notification that the ring is IDLE. The GT
- * will then assume that it is busy and bring it out of rc6.
- */
- I915_WRITE(GEN6_BSD_SLEEP_PSMI_CONTROL,
- _MASKED_BIT_ENABLE(GEN6_BSD_SLEEP_MSG_DISABLE));
-
- /* Clear the context id. Here be magic! */
- I915_WRITE64(GEN6_BSD_RNCID, 0x0);
-
- /* Wait for the ring not to be idle, i.e. for it to wake up. */
- if (wait_for((I915_READ(GEN6_BSD_SLEEP_PSMI_CONTROL) &
- GEN6_BSD_SLEEP_INDICATOR) == 0,
- 50))
- DRM_ERROR("timed out waiting for the BSD ring to wake up\n");
-
- /* Now that the ring is fully powered up, update the tail */
- I915_WRITE_TAIL(ring, value);
- POSTING_READ(RING_TAIL(ring->mmio_base));
-
- /* Let the ring send IDLE messages to the GT again,
- * and so let it sleep to conserve power when idle.
- */
- I915_WRITE(GEN6_BSD_SLEEP_PSMI_CONTROL,
- _MASKED_BIT_DISABLE(GEN6_BSD_SLEEP_MSG_DISABLE));
-}
-
static int gen6_bsd_ring_flush(struct drm_i915_gem_request *req,
u32 invalidate, u32 flush)
{
@@ -2575,7 +2591,6 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
}
ring->irq_enable_mask = I915_USER_INTERRUPT;
}
- ring->write_tail = ring_write_tail;

if (IS_HASWELL(dev))
ring->emit_bb_start = hsw_emit_bb_start;
@@ -2632,14 +2647,13 @@ int intel_init_bsd_ring_buffer(struct drm_device *dev)
ring->name = "bsd ring";
ring->id = VCS;

- ring->write_tail = ring_write_tail;
if (INTEL_INFO(dev)->gen >= 6) {
ring->mmio_base = GEN6_BSD_RING_BASE;
- /* gen6 bsd needs a special wa for tail updates */
- if (IS_GEN6(dev))
- ring->write_tail = gen6_bsd_ring_write_tail;
ring->emit_flush = gen6_bsd_ring_flush;
+ /* gen6 bsd needs a special wa for tail updates */
ring->add_request = gen6_add_request;
+ if (IS_GEN6(dev))
+ ring->add_request = gen6_bsd_add_request;
ring->irq_seqno_barrier = gen6_seqno_barrier;
if (INTEL_INFO(dev)->gen >= 8) {
ring->irq_enable_mask =
@@ -2703,7 +2717,6 @@ int intel_init_bsd2_ring_buffer(struct drm_device *dev)
ring->name = "bsd2 ring";
ring->id = VCS2;

- ring->write_tail = ring_write_tail;
ring->mmio_base = GEN8_BSD2_RING_BASE;
ring->emit_flush = gen6_bsd_ring_flush;
ring->add_request = gen6_add_request;
@@ -2732,7 +2745,6 @@ int intel_init_blt_ring_buffer(struct drm_device *dev)
ring->id = BCS;

ring->mmio_base = BLT_RING_BASE;
- ring->write_tail = ring_write_tail;
ring->emit_flush = gen6_ring_flush;
ring->add_request = gen6_add_request;
ring->irq_seqno_barrier = gen6_seqno_barrier;
@@ -2788,7 +2800,6 @@ int intel_init_vebox_ring_buffer(struct drm_device *dev)
ring->id = VECS;

ring->mmio_base = VEBOX_RING_BASE;
- ring->write_tail = ring_write_tail;
ring->emit_flush = gen6_ring_flush;
ring->add_request = gen6_add_request;
ring->irq_seqno_barrier = gen6_seqno_barrier;
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 3a10376b896f..8147ce1379fb 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -213,8 +213,15 @@ struct intel_engine_cs {

int (*init_context)(struct drm_i915_gem_request *req);

- void (*write_tail)(struct intel_engine_cs *ring,
- u32 value);
+ int (*emit_flush)(struct drm_i915_gem_request *request,
+ u32 invalidate_domains,
+ u32 flush_domains);
+ int (*emit_bb_start)(struct drm_i915_gem_request *req,
+ u64 offset, u32 length,
+ unsigned dispatch_flags);
+#define I915_DISPATCH_SECURE 0x1
+#define I915_DISPATCH_PINNED 0x2
+#define I915_DISPATCH_RS 0x4
int (*add_request)(struct drm_i915_gem_request *req);
/* Some chipsets are not quite as coherent as advertised and need
* an expensive kick to force a true read of the up-to-date seqno.
@@ -290,16 +297,6 @@ struct intel_engine_cs {
struct list_head execlist_retired_req_list;
u8 next_context_status_buffer;
u32 irq_keep_mask; /* bitmask for interrupts that should not be masked */
- int (*emit_request)(struct drm_i915_gem_request *request);
- int (*emit_flush)(struct drm_i915_gem_request *request,
- u32 invalidate_domains,
- u32 flush_domains);
- int (*emit_bb_start)(struct drm_i915_gem_request *req,
- u64 offset, u32 length,
- unsigned dispatch_flags);
-#define I915_DISPATCH_SECURE 0x1
-#define I915_DISPATCH_PINNED 0x2
-#define I915_DISPATCH_RS 0x4

/**
* List of objects currently involved in rendering from the
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:16:40 UTC
Permalink
If we convert the tracing from direct use of ring->irq_get() over to
the breadcrumb infrastructure, we only have a single user of
ring->irq_get and so will be able to simplify the driver routines
(eliminating the redundant validation and irq refcounting).

v2: Move to a signaling framework based upon the waiter.
v3: Track the first-signal to avoid having to walk the rbtree every time.
v4: Mark the signaler thread with RT priority to reduce latency in the
indirect wakeups.
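
As a rough sketch (condensed from the patch below, not a complete
listing), the flow is: the tracepoint hands the request to a per-engine
signaler kthread, which sleeps until the oldest queued seqno completes
and then wakes the fence waiters:

	/* producer, e.g. the dispatch tracepoint */
	intel_engine_enable_signaling(req); /* queue req, wake the kthread */

	/* consumer: the signaler kthread, simplified */
	for (;;) {
		set_current_state(TASK_INTERRUPTIBLE);
		signal = READ_ONCE(b->first_signal);
		if (signal_complete(signal)) {
			intel_engine_remove_wait(engine, &signal->wait);
			/* drop the request ref, pop the rbtree, kfree */
		} else if (kthread_should_stop()) {
			break;
		} else {
			schedule();
		}
	}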

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_drv.h | 8 --
drivers/gpu/drm/i915/i915_gem.c | 6 --
drivers/gpu/drm/i915/i915_irq.c | 7 +-
drivers/gpu/drm/i915/i915_trace.h | 2 +-
drivers/gpu/drm/i915/intel_breadcrumbs.c | 177 +++++++++++++++++++++++++++++++
drivers/gpu/drm/i915/intel_ringbuffer.h | 7 +-
6 files changed, 186 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 8940b8d3fa59..7f021505e32f 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -3620,14 +3620,6 @@ wait_remaining_ms_from_jiffies(unsigned long timestamp_jiffies, int to_wait_ms)
schedule_timeout_uninterruptible(remaining_jiffies);
}
}
-
-static inline void i915_trace_irq_get(struct intel_engine_cs *ring,
- struct drm_i915_gem_request *req)
-{
- if (ring->trace_irq_req == NULL && ring->irq_get(ring))
- i915_gem_request_assign(&ring->trace_irq_req, req);
-}
-
static inline bool __i915_request_irq_complete(struct drm_i915_gem_request *req)
{
struct intel_engine_cs *engine = req->ring;
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index a713e8a6cb36..5ddb2ed0f785 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2889,12 +2889,6 @@ i915_gem_retire_requests_ring(struct intel_engine_cs *ring)
i915_gem_object_retire__read(obj, ring->id);
}

- if (unlikely(ring->trace_irq_req &&
- i915_gem_request_completed(ring->trace_irq_req))) {
- ring->irq_put(ring);
- i915_gem_request_assign(&ring->trace_irq_req, NULL);
- }
-
WARN_ON(i915_verify_lists(ring->dev));
}

diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index 738edd7fbf8d..bf48fa63127a 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -996,12 +996,9 @@ static void ironlake_rps_change_irq_handler(struct drm_device *dev)

static void notify_ring(struct intel_engine_cs *ring)
{
- if (!intel_ring_initialized(ring))
- return;
-
- trace_i915_gem_request_notify(ring);
ring->irq_posted = true; /* paired with mb() in wake_up_process() */
- intel_engine_wakeup(ring);
+ if (intel_engine_wakeup(ring))
+ trace_i915_gem_request_notify(ring);
}

static void vlv_c0_read(struct drm_i915_private *dev_priv,
diff --git a/drivers/gpu/drm/i915/i915_trace.h b/drivers/gpu/drm/i915/i915_trace.h
index efca75bcace3..43bb2e0bb949 100644
--- a/drivers/gpu/drm/i915/i915_trace.h
+++ b/drivers/gpu/drm/i915/i915_trace.h
@@ -503,7 +503,7 @@ TRACE_EVENT(i915_gem_ring_dispatch,
__entry->ring = ring->id;
__entry->seqno = i915_gem_request_get_seqno(req);
__entry->flags = flags;
- i915_trace_irq_get(ring, req);
+ intel_engine_enable_signaling(req);
),

TP_printk("dev=%u, ring=%u, seqno=%u, flags=%x",
diff --git a/drivers/gpu/drm/i915/intel_breadcrumbs.c b/drivers/gpu/drm/i915/intel_breadcrumbs.c
index d689bd61534e..cf9cbcc2d5d7 100644
--- a/drivers/gpu/drm/i915/intel_breadcrumbs.c
+++ b/drivers/gpu/drm/i915/intel_breadcrumbs.c
@@ -22,6 +22,8 @@
*
*/

+#include <linux/kthread.h>
+
#include "i915_drv.h"

static void intel_breadcrumbs_fake_irq(unsigned long data)
@@ -320,10 +322,185 @@ void intel_engine_init_breadcrumbs(struct intel_engine_cs *engine)
(unsigned long)engine);
}

+struct signal {
+ struct rb_node node;
+ struct intel_wait wait;
+ struct drm_i915_gem_request *request;
+};
+
+static bool signal_complete(struct signal *signal)
+{
+ if (signal == NULL)
+ return false;
+
+ /* If another process served as the bottom-half it may have already
+ * signalled that this wait is already completed.
+ */
+ if (intel_wait_complete(&signal->wait))
+ return true;
+
+ /* Carefully check if the request is complete, giving time for the
+ * seqno to be visible or if the GPU hung.
+ */
+ if (__i915_request_irq_complete(signal->request))
+ return true;
+
+ return false;
+}
+
+static struct signal *to_signal(struct rb_node *rb)
+{
+ return container_of(rb, struct signal, node);
+}
+
+static void signaler_set_rtpriority(void)
+{
+ struct sched_param param = { .sched_priority = 1 };
+ sched_setscheduler_nocheck(current, SCHED_FIFO, &param);
+}
+
+static int intel_breadcrumbs_signaler(void *arg)
+{
+ struct intel_engine_cs *engine = arg;
+ struct intel_breadcrumbs *b = &engine->breadcrumbs;
+ struct signal *signal;
+
+ /* Install ourselves with high priority to reduce signalling latency */
+ signaler_set_rtpriority();
+
+ do {
+ set_current_state(TASK_INTERRUPTIBLE);
+
+ /* We are either woken up by the interrupt bottom-half,
+ * or by a client adding a new signaller. In both cases,
+ * the GPU seqno may have advanced beyond our oldest signal.
+ * If it has, propagate the signal, remove the waiter and
+ * check again with the next oldest signal. Otherwise we
+ * need to wait for a new interrupt from the GPU or for
+ * a new client.
+ */
+ signal = READ_ONCE(b->first_signal);
+ if (signal_complete(signal)) {
+ /* Wake up all other completed waiters and select the
+ * next bottom-half for the next user interrupt.
+ */
+ intel_engine_remove_wait(engine, &signal->wait);
+
+ i915_gem_request_unreference__unlocked(signal->request);
+
+ /* Find the next oldest signal. Note that as we have
+ * not been holding the lock, another client may
+ * have installed an even older signal than the one
+ * we just completed - so double check we are still
+ * the oldest before picking the next one.
+ */
+ spin_lock(&b->lock);
+ if (signal == b->first_signal)
+ b->first_signal = rb_next(&signal->node);
+ rb_erase(&signal->node, &b->signals);
+ spin_unlock(&b->lock);
+
+ kfree(signal);
+ } else {
+ if (kthread_should_stop())
+ break;
+
+ schedule();
+ }
+ } while (1);
+
+ return 0;
+}
+
+int intel_engine_enable_signaling(struct drm_i915_gem_request *request)
+{
+ struct intel_engine_cs *engine = request->ring;
+ struct intel_breadcrumbs *b = &engine->breadcrumbs;
+ struct rb_node *parent, **p;
+ struct task_struct *task;
+ struct signal *signal;
+ bool first;
+
+ signal = kmalloc(sizeof(*signal), GFP_ATOMIC);
+ if (unlikely(signal == NULL))
+ return -ENOMEM;
+
+ /* Spawn a thread to provide a common bottom-half for all signals.
+ * As this is an asynchronous interface we cannot steal the current
+ * task for handling the bottom-half to the user interrupt, therefore
+ * we create a thread to do the coherent seqno dance after the
+ * interrupt and then signal the waitqueue (via the dma-buf/fence).
+ */
+ task = READ_ONCE(b->signaler);
+ if (unlikely(task == NULL)) {
+ spin_lock(&b->lock);
+ task = b->signaler;
+ if (task == NULL) {
+ task = kthread_create(intel_breadcrumbs_signaler,
+ engine,
+ "irq/i915:%d",
+ engine->id);
+ if (!IS_ERR(task))
+ b->signaler = task;
+ }
+ spin_unlock(&b->lock);
+
+ if (IS_ERR(task)) {
+ kfree(signal);
+ return PTR_ERR(task);
+ }
+ }
+
+ signal->wait.task = task;
+ signal->wait.seqno = request->seqno;
+
+ signal->request = i915_gem_request_reference(request);
+
+ /* Insert ourselves into the retirement ordered list of signals
+ * on this engine. We track the oldest seqno as that will be the
+ * first signal to complete.
+ */
+ spin_lock(&b->lock);
+ parent = NULL;
+ first = true;
+ p = &b->signals.rb_node;
+ while (*p) {
+ parent = *p;
+ if (i915_seqno_passed(signal->wait.seqno,
+ to_signal(parent)->wait.seqno)) {
+ p = &parent->rb_right;
+ first = false;
+ } else
+ p = &parent->rb_left;
+ }
+ rb_link_node(&signal->node, parent, p);
+ rb_insert_color(&signal->node, &b->signals);
+ if (first)
+ smp_store_mb(b->first_signal, signal);
+ spin_unlock(&b->lock);
+
+ /* Now add ourselves into the list of waiters, but register our
+ * bottom-half as the signaller thread. As per usual, only the oldest
+ * waiter (not just signaller) is tasked as the bottom-half waking
+ * up all completed waiters after the user interrupt.
+ *
+ * If we are the oldest waiter, enable the irq (after which we
+ * must double check that the seqno did not complete).
+ */
+ if (intel_engine_add_wait(engine, &signal->wait) &&
+ intel_engine_enable_wait_irq(engine, &signal->wait))
+ wake_up_process(task);
+
+ return 0;
+}
+
void intel_engine_fini_breadcrumbs(struct intel_engine_cs *engine)
{
struct intel_breadcrumbs *b = &engine->breadcrumbs;

+ if (b->signaler)
+ kthread_stop(b->signaler);
+
del_timer_sync(&b->fake_irq);
}

diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 8f305ce253ae..ba81052999fa 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -145,6 +145,8 @@ struct i915_ctx_workarounds {
struct drm_i915_gem_object *obj;
};

+struct drm_i915_gem_request;
+
struct intel_engine_cs {
const char *name;
enum intel_ring_id {
@@ -181,7 +183,10 @@ struct intel_engine_cs {
struct intel_breadcrumbs {
spinlock_t lock; /* protects the lists of requests */
struct rb_root waiters; /* sorted by retirement, priority */
+ struct rb_root signals; /* sorted by retirement */
struct task_struct *first_waiter; /* bh for user interrupts */
+ struct task_struct *signaler; /* used for fence signalling */
+ void *first_signal;
struct timer_list fake_irq; /* used after a missed interrupt */
bool irq_enabled;
bool rpm_wakelock;
@@ -200,7 +205,6 @@ struct intel_engine_cs {
unsigned irq_refcount; /* protected by dev_priv->irq_lock */
bool irq_posted;
u32 irq_enable_mask; /* bitmask to enable ring interrupt */
- struct drm_i915_gem_request *trace_irq_req;
bool __must_check (*irq_get)(struct intel_engine_cs *ring);
void (*irq_put)(struct intel_engine_cs *ring);

@@ -558,6 +562,7 @@ bool intel_engine_enable_wait_irq(struct intel_engine_cs *engine,
const struct intel_wait *wait);
void intel_engine_remove_wait(struct intel_engine_cs *engine,
struct intel_wait *wait);
+int intel_engine_enable_signaling(struct drm_i915_gem_request *request);
static inline bool intel_engine_has_waiter(struct intel_engine_cs *engine)
{
return READ_ONCE(engine->breadcrumbs.first_waiter);
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:16:45 UTC
Permalink
Rather than persistently postponing the idle-work every time somebody
calls i915_gem_retire_requests() (potentially ensuring that we never
reach the idle state), queue the work the first time we detect that all
requests are complete. Then if, within 100ms, more requests have been
queued, we will abort the idle-worker and wait again until all the new
requests have been completed.
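
The difference lies in the workqueue API semantics; a minimal sketch,
assuming the usual <linux/workqueue.h> behaviour:

	/* mod_delayed_work(): (re)arms the timer even if the work is
	 * already pending, so every retire pushes idleness 100ms away.
	 */
	mod_delayed_work(wq, &idle_work, msecs_to_jiffies(100));

	/* queue_delayed_work(): a no-op (returns false) if the work is
	 * already pending, so the deadline set by the first caller
	 * stands and the handler itself re-checks for new requests.
	 */
	queue_delayed_work(wq, &idle_work, msecs_to_jiffies(100));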

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_gem.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 3788fce136f3..efd46adb978b 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2946,9 +2946,9 @@ i915_gem_retire_requests(struct drm_device *dev)
}

if (idle)
- mod_delayed_work(dev_priv->wq,
- &dev_priv->mm.idle_work,
- msecs_to_jiffies(100));
+ queue_delayed_work(dev_priv->wq,
+ &dev_priv->mm.idle_work,
+ msecs_to_jiffies(100));
}

static void
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:16:35 UTC
Permalink
When reading from the HWS page, we use barrier() to prevent the compiler
optimising away the read from the volatile (may be updated by the GPU)
memory address. This is more suited to READ_ONCE(); make it so.
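
For reference, both idioms force the load to be (re)issued; a sketch,
assuming the usual <linux/compiler.h> definitions:

	/* Before: a full compiler barrier, discarding all cached values */
	barrier();
	seqno = ring->status_page.page_addr[reg];

	/* After: a volatile access constraining only this one load,
	 * roughly *(volatile u32 *)&ring->status_page.page_addr[reg]
	 */
	seqno = READ_ONCE(ring->status_page.page_addr[reg]);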

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
Cc: Daniel Vetter <***@ffwll.ch>
---
drivers/gpu/drm/i915/intel_ringbuffer.h | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 6cc8e9c5f8d6..8f305ce253ae 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -418,8 +418,7 @@ intel_read_status_page(struct intel_engine_cs *ring,
int reg)
{
/* Ensure that the compiler doesn't optimize away the load. */
- barrier();
- return ring->status_page.page_addr[reg];
+ return READ_ONCE(ring->status_page.page_addr[reg]);
}

static inline void
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:17:16 UTC
Permalink
Space for flushing the GPU cache prior to completing the request is
preallocated, and so the flush emitted there cannot fail.
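
The resulting call in __i915_add_request() (visible in the
i915_gem_request.c hunk below) therefore only WARNs on a return code
that can no longer occur:

	ret = request->engine->emit_flush(request, 0, I915_GEM_GPU_DOMAINS);
	/* Not allowed to fail: the ring space was reserved up front. */
	WARN(ret, "*_ring_flush_all_caches failed: %d!\n", ret);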

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_gem_context.c | 2 +-
drivers/gpu/drm/i915/i915_gem_execbuffer.c | 9 +---
drivers/gpu/drm/i915/i915_gem_gtt.c | 18 ++++----
drivers/gpu/drm/i915/i915_gem_request.c | 7 ++-
drivers/gpu/drm/i915/intel_lrc.c | 47 +++----------------
drivers/gpu/drm/i915/intel_lrc.h | 2 -
drivers/gpu/drm/i915/intel_ringbuffer.c | 72 +++++++-----------------------
drivers/gpu/drm/i915/intel_ringbuffer.h | 7 ---
8 files changed, 39 insertions(+), 125 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index 17fe8ed991d6..c078ebc29da5 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -534,7 +534,7 @@ mi_set_context(struct drm_i915_gem_request *req, u32 hw_flags)
* itlb_before_ctx_switch.
*/
if (IS_GEN6(req->i915)) {
- ret = req->engine->flush(req, I915_GEM_GPU_DOMAINS, 0);
+ ret = req->engine->emit_flush(req, I915_GEM_GPU_DOMAINS, 0);
if (ret)
return ret;
}
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index 731ce13dbdbc..a56fae99a1bc 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -969,10 +969,8 @@ i915_gem_execbuffer_move_to_gpu(struct drm_i915_gem_request *req,
if (flush_domains & I915_GEM_DOMAIN_GTT)
wmb();

- /* Unconditionally invalidate gpu caches and ensure that we do flush
- * any residual writes from the previous batch.
- */
- return intel_engine_invalidate_all_caches(req);
+ /* Unconditionally invalidate gpu caches and TLBs. */
+ return req->engine->emit_flush(req, I915_GEM_GPU_DOMAINS, 0);
}

static bool
@@ -1138,9 +1136,6 @@ i915_gem_execbuffer_move_to_active(struct list_head *vmas,
static void
i915_gem_execbuffer_retire_commands(struct i915_execbuffer_params *params)
{
- /* Unconditionally force add_request to emit a full flush. */
- params->ring->gpu_caches_dirty = true;
-
/* Add a breadcrumb for the completion of the batch buffer */
__i915_add_request(params->request, params->batch_obj, true);
}
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 9a91451d66ac..cddbd8c00663 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -1652,9 +1652,9 @@ static int hsw_mm_switch(struct i915_hw_ppgtt *ppgtt,
int ret;

/* NB: TLBs must be flushed and invalidated before a switch */
- ret = req->engine->flush(req,
- I915_GEM_GPU_DOMAINS,
- I915_GEM_GPU_DOMAINS);
+ ret = req->engine->emit_flush(req,
+ I915_GEM_GPU_DOMAINS,
+ I915_GEM_GPU_DOMAINS);
if (ret)
return ret;

@@ -1690,9 +1690,9 @@ static int gen7_mm_switch(struct i915_hw_ppgtt *ppgtt,
int ret;

/* NB: TLBs must be flushed and invalidated before a switch */
- ret = req->engine->flush(req,
- I915_GEM_GPU_DOMAINS,
- I915_GEM_GPU_DOMAINS);
+ ret = req->engine->emit_flush(req,
+ I915_GEM_GPU_DOMAINS,
+ I915_GEM_GPU_DOMAINS);
if (ret)
return ret;

@@ -1710,9 +1710,9 @@ static int gen7_mm_switch(struct i915_hw_ppgtt *ppgtt,

/* XXX: RCS is the only one to auto invalidate the TLBs? */
if (req->engine->id != RCS) {
- ret = req->engine->flush(req,
- I915_GEM_GPU_DOMAINS,
- I915_GEM_GPU_DOMAINS);
+ ret = req->engine->emit_flush(req,
+ I915_GEM_GPU_DOMAINS,
+ I915_GEM_GPU_DOMAINS);
if (ret)
return ret;
}
diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
index e1f2af046b6c..e911430575fe 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/i915_gem_request.c
@@ -426,10 +426,9 @@ void __i915_add_request(struct drm_i915_gem_request *request,
* what.
*/
if (flush_caches) {
- if (i915.enable_execlists)
- ret = logical_ring_flush_all_caches(request);
- else
- ret = intel_engine_flush_all_caches(request);
+ ret = request->engine->emit_flush(request,
+ 0, I915_GEM_GPU_DOMAINS);
+
/* Not allowed to fail! */
WARN(ret, "*_ring_flush_all_caches failed: %d!\n", ret);
}
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 3a80d9d45f5c..b889680f7491 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -616,24 +616,6 @@ static int execlists_context_queue(struct drm_i915_gem_request *request)
return 0;
}

-static int logical_ring_invalidate_all_caches(struct drm_i915_gem_request *req)
-{
- struct intel_engine_cs *engine = req->engine;
- uint32_t flush_domains;
- int ret;
-
- flush_domains = 0;
- if (engine->gpu_caches_dirty)
- flush_domains = I915_GEM_GPU_DOMAINS;
-
- ret = engine->emit_flush(req, I915_GEM_GPU_DOMAINS, flush_domains);
- if (ret)
- return ret;
-
- engine->gpu_caches_dirty = false;
- return 0;
-}
-
static int execlists_move_to_gpu(struct drm_i915_gem_request *req,
struct list_head *vmas)
{
@@ -664,7 +646,7 @@ static int execlists_move_to_gpu(struct drm_i915_gem_request *req,
/* Unconditionally invalidate gpu caches and ensure that we do flush
* any residual writes from the previous batch.
*/
- return logical_ring_invalidate_all_caches(req);
+ return req->engine->emit_flush(req, I915_GEM_GPU_DOMAINS, 0);
}

int intel_logical_ring_alloc_request_extras(struct drm_i915_gem_request *request)
@@ -860,22 +842,6 @@ void intel_logical_ring_stop(struct intel_engine_cs *ring)
I915_WRITE_MODE(ring, _MASKED_BIT_DISABLE(STOP_RING));
}

-int logical_ring_flush_all_caches(struct drm_i915_gem_request *req)
-{
- struct intel_engine_cs *engine = req->engine;
- int ret;
-
- if (!engine->gpu_caches_dirty)
- return 0;
-
- ret = engine->emit_flush(req, 0, I915_GEM_GPU_DOMAINS);
- if (ret)
- return ret;
-
- engine->gpu_caches_dirty = false;
- return 0;
-}
-
static int intel_lr_context_do_pin(struct intel_engine_cs *ring,
struct drm_i915_gem_object *ctx_obj,
struct intel_ring *ringbuf)
@@ -946,7 +912,6 @@ void intel_lr_context_unpin(struct drm_i915_gem_request *rq)
static int intel_logical_ring_workarounds_emit(struct drm_i915_gem_request *req)
{
int ret, i;
- struct intel_engine_cs *engine = req->engine;
struct intel_ring *ring = req->ring;
struct drm_i915_private *dev_priv = req->i915;
struct i915_workarounds *w = &dev_priv->workarounds;
@@ -954,8 +919,9 @@ static int intel_logical_ring_workarounds_emit(struct drm_i915_gem_request *req)
if (w->count == 0)
return 0;

- engine->gpu_caches_dirty = true;
- ret = logical_ring_flush_all_caches(req);
+ ret = req->engine->emit_flush(req,
+ I915_GEM_GPU_DOMAINS,
+ I915_GEM_GPU_DOMAINS);
if (ret)
return ret;

@@ -972,8 +938,9 @@ static int intel_logical_ring_workarounds_emit(struct drm_i915_gem_request *req)

intel_ring_advance(ring);

- engine->gpu_caches_dirty = true;
- ret = logical_ring_flush_all_caches(req);
+ ret = req->engine->emit_flush(req,
+ I915_GEM_GPU_DOMAINS,
+ I915_GEM_GPU_DOMAINS);
if (ret)
return ret;

diff --git a/drivers/gpu/drm/i915/intel_lrc.h b/drivers/gpu/drm/i915/intel_lrc.h
index c88988a41898..7f01d2ddacfa 100644
--- a/drivers/gpu/drm/i915/intel_lrc.h
+++ b/drivers/gpu/drm/i915/intel_lrc.h
@@ -60,8 +60,6 @@ void intel_logical_ring_stop(struct intel_engine_cs *ring);
void intel_logical_ring_cleanup(struct intel_engine_cs *ring);
int intel_logical_rings_init(struct drm_device *dev);

-int logical_ring_flush_all_caches(struct drm_i915_gem_request *req);
-
/* Logical Ring Contexts */

/* One extra page is added before LRC for GuC as shared data */
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index 74a4a54e6ca5..e584b0f631f8 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -696,8 +696,9 @@ static int intel_ring_workarounds_emit(struct drm_i915_gem_request *req)
if (w->count == 0)
return 0;

- req->engine->gpu_caches_dirty = true;
- ret = intel_engine_flush_all_caches(req);
+ ret = req->engine->emit_flush(req,
+ I915_GEM_GPU_DOMAINS,
+ I915_GEM_GPU_DOMAINS);
if (ret)
return ret;

@@ -714,8 +715,9 @@ static int intel_ring_workarounds_emit(struct drm_i915_gem_request *req)

intel_ring_advance(ring);

- req->engine->gpu_caches_dirty = true;
- ret = intel_engine_flush_all_caches(req);
+ ret = req->engine->emit_flush(req,
+ I915_GEM_GPU_DOMAINS,
+ I915_GEM_GPU_DOMAINS);
if (ret)
return ret;

@@ -2509,7 +2511,7 @@ int intel_init_render_ring_buffer(struct drm_device *dev)

ring->init_context = intel_rcs_ctx_init;
ring->add_request = gen6_add_request;
- ring->flush = gen8_render_ring_flush;
+ ring->emit_flush = gen8_render_ring_flush;
ring->irq_enable = gen8_ring_enable_irq;
ring->irq_disable = gen8_ring_disable_irq;
ring->irq_enable_mask = GT_RENDER_USER_INTERRUPT;
@@ -2523,9 +2525,9 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
} else if (INTEL_INFO(dev)->gen >= 6) {
ring->init_context = intel_rcs_ctx_init;
ring->add_request = gen6_add_request;
- ring->flush = gen7_render_ring_flush;
+ ring->emit_flush = gen7_render_ring_flush;
if (INTEL_INFO(dev)->gen == 6)
- ring->flush = gen6_render_ring_flush;
+ ring->emit_flush = gen6_render_ring_flush;
ring->irq_enable = gen6_ring_enable_irq;
ring->irq_disable = gen6_ring_disable_irq;
ring->irq_enable_mask = GT_RENDER_USER_INTERRUPT;
@@ -2553,7 +2555,7 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
}
} else if (IS_GEN5(dev)) {
ring->add_request = pc_render_add_request;
- ring->flush = gen4_render_ring_flush;
+ ring->emit_flush = gen4_render_ring_flush;
ring->irq_enable = gen5_ring_enable_irq;
ring->irq_disable = gen5_ring_disable_irq;
ring->irq_enable_mask = GT_RENDER_USER_INTERRUPT |
@@ -2561,9 +2563,9 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
} else {
ring->add_request = i9xx_add_request;
if (INTEL_INFO(dev)->gen < 4)
- ring->flush = gen2_render_ring_flush;
+ ring->emit_flush = gen2_render_ring_flush;
else
- ring->flush = gen4_render_ring_flush;
+ ring->emit_flush = gen4_render_ring_flush;
if (IS_GEN2(dev)) {
ring->irq_enable = i8xx_ring_enable_irq;
ring->irq_disable = i8xx_ring_disable_irq;
@@ -2636,7 +2638,7 @@ int intel_init_bsd_ring_buffer(struct drm_device *dev)
/* gen6 bsd needs a special wa for tail updates */
if (IS_GEN6(dev))
ring->write_tail = gen6_bsd_ring_write_tail;
- ring->flush = gen6_bsd_ring_flush;
+ ring->emit_flush = gen6_bsd_ring_flush;
ring->add_request = gen6_add_request;
ring->irq_seqno_barrier = gen6_seqno_barrier;
if (INTEL_INFO(dev)->gen >= 8) {
@@ -2674,7 +2676,7 @@ int intel_init_bsd_ring_buffer(struct drm_device *dev)
}
} else {
ring->mmio_base = BSD_RING_BASE;
- ring->flush = bsd_ring_flush;
+ ring->emit_flush = bsd_ring_flush;
ring->add_request = i9xx_add_request;
if (IS_GEN5(dev)) {
ring->irq_enable_mask = ILK_BSD_USER_INTERRUPT;
@@ -2705,7 +2707,7 @@ int intel_init_bsd2_ring_buffer(struct drm_device *dev)

ring->write_tail = ring_write_tail;
ring->mmio_base = GEN8_BSD2_RING_BASE;
- ring->flush = gen6_bsd_ring_flush;
+ ring->emit_flush = gen6_bsd_ring_flush;
ring->add_request = gen6_add_request;
ring->irq_seqno_barrier = gen6_seqno_barrier;
ring->irq_enable_mask =
@@ -2734,7 +2736,7 @@ int intel_init_blt_ring_buffer(struct drm_device *dev)

ring->mmio_base = BLT_RING_BASE;
ring->write_tail = ring_write_tail;
- ring->flush = gen6_ring_flush;
+ ring->emit_flush = gen6_ring_flush;
ring->add_request = gen6_add_request;
ring->irq_seqno_barrier = gen6_seqno_barrier;
if (INTEL_INFO(dev)->gen >= 8) {
@@ -2790,7 +2792,7 @@ int intel_init_vebox_ring_buffer(struct drm_device *dev)

ring->mmio_base = VEBOX_RING_BASE;
ring->write_tail = ring_write_tail;
- ring->flush = gen6_ring_flush;
+ ring->emit_flush = gen6_ring_flush;
ring->add_request = gen6_add_request;
ring->irq_seqno_barrier = gen6_seqno_barrier;

@@ -2830,46 +2832,6 @@ int intel_init_vebox_ring_buffer(struct drm_device *dev)
return intel_init_engine(dev, ring);
}

-int
-intel_engine_flush_all_caches(struct drm_i915_gem_request *req)
-{
- struct intel_engine_cs *engine = req->engine;
- int ret;
-
- if (!engine->gpu_caches_dirty)
- return 0;
-
- ret = engine->flush(req, 0, I915_GEM_GPU_DOMAINS);
- if (ret)
- return ret;
-
- trace_i915_gem_ring_flush(req, 0, I915_GEM_GPU_DOMAINS);
-
- engine->gpu_caches_dirty = false;
- return 0;
-}
-
-int
-intel_engine_invalidate_all_caches(struct drm_i915_gem_request *req)
-{
- struct intel_engine_cs *engine = req->engine;
- uint32_t flush_domains;
- int ret;
-
- flush_domains = 0;
- if (engine->gpu_caches_dirty)
- flush_domains = I915_GEM_GPU_DOMAINS;
-
- ret = engine->flush(req, I915_GEM_GPU_DOMAINS, flush_domains);
- if (ret)
- return ret;
-
- trace_i915_gem_ring_flush(req, I915_GEM_GPU_DOMAINS, flush_domains);
-
- engine->gpu_caches_dirty = false;
- return 0;
-}
-
void
intel_engine_stop(struct intel_engine_cs *ring)
{
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 15d067b9b8a2..fdeadae726b8 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -215,9 +215,6 @@ struct intel_engine_cs {

void (*write_tail)(struct intel_engine_cs *ring,
u32 value);
- int __must_check (*flush)(struct drm_i915_gem_request *req,
- u32 invalidate_domains,
- u32 flush_domains);
int (*add_request)(struct drm_i915_gem_request *req);
/* Some chipsets are not quite as coherent as advertised and need
* an expensive kick to force a true read of the up-to-date seqno.
@@ -332,8 +329,6 @@ struct intel_engine_cs {
u32 last_submitted_seqno;
unsigned user_interrupts;

- bool gpu_caches_dirty;
-
struct intel_context *default_context;
struct intel_context *last_context;

@@ -486,8 +481,6 @@ int intel_ring_space(struct intel_ring *ringbuf);

int __must_check intel_engine_idle(struct intel_engine_cs *ring);
void intel_engine_init_seqno(struct intel_engine_cs *ring, u32 seqno);
-int intel_engine_flush_all_caches(struct drm_i915_gem_request *req);
-int intel_engine_invalidate_all_caches(struct drm_i915_gem_request *req);

void intel_fini_pipe_control(struct intel_engine_cs *ring);
int intel_init_pipe_control(struct intel_engine_cs *ring);
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:16:25 UTC
Permalink
We can forgo queuing the hangcheck at the start of every request and
instead defer it until we wait upon a request. This reduces the overhead
of every request, but may increase the latency of detecting a hang.
However, if nothing ever waits upon a hang, did it ever hang? It also
improves the
robustness of the wait-request by ensuring that the hangchecker is
indeed running before we sleep indefinitely (and thereby ensuring that
we never actually sleep forever waiting for a dead GPU).

v2: Also queue the hangcheck from the retire work in case the GPU becomes
stuck when no one is watching.
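
The invariant is that no waiter goes to sleep without the watchdog
armed; a condensed sketch of the idea (placement simplified, see the
__i915_wait_request() hunk below):

	for (;;) {
		if (i915_gem_request_completed(req))
			break;

		/* Ensure that even if the GPU hangs, we get woken up. */
		i915_queue_hangcheck(dev_priv); /* no-op if already pending */

		io_schedule();
	}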

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_drv.h | 2 +-
drivers/gpu/drm/i915/i915_gem.c | 13 ++++++++-----
drivers/gpu/drm/i915/i915_irq.c | 9 ++++-----
3 files changed, 13 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index bbdb056d2a8e..d9d411919779 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2710,7 +2710,7 @@ void intel_hpd_cancel_work(struct drm_i915_private *dev_priv);
bool intel_hpd_pin_to_port(enum hpd_pin pin, enum port *port);

/* i915_irq.c */
-void i915_queue_hangcheck(struct drm_device *dev);
+void i915_queue_hangcheck(struct drm_i915_private *dev_priv);
__printf(3, 4)
void i915_handle_error(struct drm_device *dev, bool wedged,
const char *fmt, ...);
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index f570990f03e0..b4da8b354a3b 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1306,6 +1306,9 @@ int __i915_wait_request(struct drm_i915_gem_request *req,
break;
}

+ /* Ensure that even if the GPU hangs, we get woken up. */
+ i915_queue_hangcheck(dev_priv);
+
timer.function = NULL;
if (timeout || missed_irq(dev_priv, ring)) {
unsigned long expire;
@@ -2592,8 +2595,6 @@ void __i915_add_request(struct drm_i915_gem_request *request,

trace_i915_gem_request_add(request);

- i915_queue_hangcheck(ring->dev);
-
queue_delayed_work(dev_priv->wq,
&dev_priv->mm.retire_work,
round_jiffies_up_relative(HZ));
@@ -2947,8 +2948,8 @@ i915_gem_retire_requests(struct drm_device *dev)

if (idle)
mod_delayed_work(dev_priv->wq,
- &dev_priv->mm.idle_work,
- msecs_to_jiffies(100));
+ &dev_priv->mm.idle_work,
+ msecs_to_jiffies(100));

return idle;
}
@@ -2967,9 +2968,11 @@ i915_gem_retire_work_handler(struct work_struct *work)
idle = i915_gem_retire_requests(dev);
mutex_unlock(&dev->struct_mutex);
}
- if (!idle)
+ if (!idle) {
+ i915_queue_hangcheck(dev_priv);
queue_delayed_work(dev_priv->wq, &dev_priv->mm.retire_work,
round_jiffies_up_relative(HZ));
+ }
}

static void
diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index 15973e917566..94f5f4e99446 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -3165,18 +3165,17 @@ static void i915_hangcheck_elapsed(struct work_struct *work)
goto out;
}

+ /* Reset timer in case GPU hangs without another request being added */
if (busy_count)
- /* Reset timer case chip hangs without another request
- * being added */
- i915_queue_hangcheck(dev);
+ i915_queue_hangcheck(dev_priv);

out:
ENABLE_RPM_WAKEREF_ASSERTS(dev_priv);
}

-void i915_queue_hangcheck(struct drm_device *dev)
+void i915_queue_hangcheck(struct drm_i915_private *dev_priv)
{
- struct i915_gpu_error *e = &to_i915(dev)->gpu_error;
+ struct i915_gpu_error *e = &dev_priv->gpu_error;

if (!i915.enable_hangcheck)
return;
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:16:50 UTC
Permalink
Now that we have (near) universal GPU recovery code, we can inject a
real hang from userspace and not need any fakery. Not only does this
mean that the testing is far more realistic, but we can simplify the
kernel in the process.

v2: Replace i915_stop_rings with a dummy implementation, as igt has
codified its existence, until we can release an update.
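
One way userspace can inject a real hang (an assumption about the igt
side, not shown in this patch) is a batch that branches to itself, so
hangcheck observes no forward progress and fires a genuine reset:

	/* Hypothetical hang batch; the exact MI_BATCH_BUFFER_START
	 * flags are generation-dependent and elided here.
	 */
	u32 batch[] = {
		MI_BATCH_BUFFER_START /* | gen-specific flags */,
		batch_offset, /* branch target: this batch itself */
		MI_NOOP,
	};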

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_debugfs.c | 19 +------------------
drivers/gpu/drm/i915/i915_drv.c | 17 ++---------------
drivers/gpu/drm/i915/i915_drv.h | 19 -------------------
drivers/gpu/drm/i915/i915_gem.c | 13 +++----------
drivers/gpu/drm/i915/intel_lrc.c | 5 -----
drivers/gpu/drm/i915/intel_ringbuffer.c | 8 --------
drivers/gpu/drm/i915/intel_ringbuffer.h | 1 -
7 files changed, 6 insertions(+), 76 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 567f8db4c70a..6172649b7e56 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -4752,30 +4752,13 @@ DEFINE_SIMPLE_ATTRIBUTE(i915_wedged_fops,
static int
i915_ring_stop_get(void *data, u64 *val)
{
- struct drm_device *dev = data;
- struct drm_i915_private *dev_priv = dev->dev_private;
-
- *val = dev_priv->gpu_error.stop_rings;
-
+ *val = 0;
return 0;
}

static int
i915_ring_stop_set(void *data, u64 val)
{
- struct drm_device *dev = data;
- struct drm_i915_private *dev_priv = dev->dev_private;
- int ret;
-
- DRM_DEBUG_DRIVER("Stopping rings 0x%08llx\n", val);
-
- ret = mutex_lock_interruptible(&dev->struct_mutex);
- if (ret)
- return ret;
-
- dev_priv->gpu_error.stop_rings = val;
- mutex_unlock(&dev->struct_mutex);
-
return 0;
}

diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index 442e1217e442..e9f85fd0542f 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -891,24 +891,11 @@ int i915_reset(struct drm_device *dev)
goto error;
}

+ pr_notice("drm/i915: Resetting chip after gpu hang\n");
+
i915_gem_reset(dev);

ret = intel_gpu_reset(dev);
-
- /* Also reset the gpu hangman. */
- if (error->stop_rings != 0) {
- DRM_INFO("Simulated gpu hang, resetting stop_rings\n");
- error->stop_rings = 0;
- if (ret == -ENODEV) {
- DRM_INFO("Reset not implemented, but ignoring "
- "error for simulated gpu hangs\n");
- ret = 0;
- }
- }
-
- if (i915_stop_ring_allow_warn(dev_priv))
- pr_notice("drm/i915: Resetting chip after gpu hang\n");
-
if (ret) {
if (ret != -ENODEV)
DRM_ERROR("Failed to reset chip: %i\n", ret);
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 9ec6f3e9e74d..c3b795f1566b 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1371,13 +1371,6 @@ struct i915_gpu_error {
*/
wait_queue_head_t reset_queue;

- /* Userspace knobs for gpu hang simulation;
- * combines both a ring mask, and extra flags
- */
- u32 stop_rings;
-#define I915_STOP_RING_ALLOW_BAN (1 << 31)
-#define I915_STOP_RING_ALLOW_WARN (1 << 30)
-
/* For missed irq/seqno simulation. */
unsigned long test_irq_rings;
};
@@ -3030,18 +3023,6 @@ static inline u32 i915_reset_count(struct i915_gpu_error *error)
return ((i915_reset_counter(error) & ~I915_WEDGED) + 1) / 2;
}

-static inline bool i915_stop_ring_allow_ban(struct drm_i915_private *dev_priv)
-{
- return dev_priv->gpu_error.stop_rings == 0 ||
- dev_priv->gpu_error.stop_rings & I915_STOP_RING_ALLOW_BAN;
-}
-
-static inline bool i915_stop_ring_allow_warn(struct drm_i915_private *dev_priv)
-{
- return dev_priv->gpu_error.stop_rings == 0 ||
- dev_priv->gpu_error.stop_rings & I915_STOP_RING_ALLOW_WARN;
-}
-
void i915_gem_reset(struct drm_device *dev);
bool i915_gem_clflush_object(struct drm_i915_gem_object *obj, bool force);
int __must_check i915_gem_init(struct drm_device *dev);
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 3948e85eaa48..ea9344503bf6 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2633,21 +2633,14 @@ static bool i915_context_is_banned(struct drm_i915_private *dev_priv,
{
unsigned long elapsed;

- elapsed = get_seconds() - ctx->hang_stats.guilty_ts;
-
if (ctx->hang_stats.banned)
return true;

+ elapsed = get_seconds() - ctx->hang_stats.guilty_ts;
if (ctx->hang_stats.ban_period_seconds &&
elapsed <= ctx->hang_stats.ban_period_seconds) {
- if (!i915_gem_context_is_default(ctx)) {
- DRM_DEBUG("context hanging too fast, banning!\n");
- return true;
- } else if (i915_stop_ring_allow_ban(dev_priv)) {
- if (i915_stop_ring_allow_warn(dev_priv))
- DRM_ERROR("gpu hanging too fast, banning!\n");
- return true;
- }
+ DRM_DEBUG("context hanging too fast, banning!\n");
+ return true;
}

return false;
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index b1ede2e9b372..b634e7d7a92b 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -756,16 +756,11 @@ static int logical_ring_wait_for_space(struct drm_i915_gem_request *req,
static void
intel_logical_ring_advance_and_submit(struct drm_i915_gem_request *request)
{
- struct intel_engine_cs *ring = request->ring;
struct drm_i915_private *dev_priv = request->i915;

intel_logical_ring_advance(request->ringbuf);
-
request->tail = request->ringbuf->tail;

- if (intel_ring_stopped(ring))
- return;
-
if (dev_priv->guc.execbuf_client)
i915_guc_submit(dev_priv->guc.execbuf_client, request);
else
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index 5625f56a2db1..d9bb6458fa60 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -59,18 +59,10 @@ int intel_ring_space(struct intel_ringbuffer *ringbuf)
return ringbuf->space;
}

-bool intel_ring_stopped(struct intel_engine_cs *ring)
-{
- struct drm_i915_private *dev_priv = ring->dev->dev_private;
- return dev_priv->gpu_error.stop_rings & intel_ring_flag(ring);
-}
-
static void __intel_ring_advance(struct intel_engine_cs *ring)
{
struct intel_ringbuffer *ringbuf = ring->buffer;
ringbuf->tail &= ringbuf->size - 1;
- if (intel_ring_stopped(ring))
- return;
ring->write_tail(ring, ringbuf->tail);
}

diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 73da75fa47c1..eecf9c7ae2b8 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -487,7 +487,6 @@ static inline void intel_ring_advance(struct intel_engine_cs *ring)
int __intel_ring_space(int head, int tail, int size);
void intel_ring_update_space(struct intel_ringbuffer *ringbuf);
int intel_ring_space(struct intel_ringbuffer *ringbuf);
-bool intel_ring_stopped(struct intel_engine_cs *ring);

int __must_check intel_ring_idle(struct intel_engine_cs *ring);
void intel_ring_init_seqno(struct intel_engine_cs *ring, u32 seqno);
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:16:33 UTC
Permalink
If we have multiple waiters, we may find that many complete on the same
wake-up. If we first inspect the seqno from the CPU cache, we may reduce
the number of heavyweight coherent seqno reads we require.
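
Condensed, the check becomes a cheap fast path with the coherent
barrier demoted to the slow path (as in the hunk below):

	/* Fast path: the cached seqno may already show completion. */
	if (i915_gem_request_completed(req))
		return true;

	/* Slow path: force a coherent read, then check once more. */
	if (engine->irq_seqno_barrier) {
		engine->irq_seqno_barrier(engine);
		if (i915_gem_request_completed(req))
			return true;
	}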

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_drv.h | 14 ++++++++++----
1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index fcedcbc50834..c2ee8efdd928 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -3632,6 +3632,12 @@ static inline bool __i915_request_irq_complete(struct drm_i915_gem_request *req)
{
struct intel_engine_cs *engine = req->ring;

+ /* Before we do the heavier coherent read of the seqno,
+ * check the value (hopefully) in the CPU cacheline.
+ */
+ if (i915_gem_request_completed(req))
+ return true;
+
/* Ensure our read of the seqno is coherent so that we
* do not "miss an interrupt" (i.e. if this is the last
* request and the seqno write from the GPU is not visible
@@ -3643,11 +3649,11 @@ static inline bool __i915_request_irq_complete(struct drm_i915_gem_request *req)
* but it is easier and safer to do it every time the waiter
* is woken.
*/
- if (engine->irq_seqno_barrier)
+ if (engine->irq_seqno_barrier) {
engine->irq_seqno_barrier(engine);
-
- if (i915_gem_request_completed(req))
- return true;
+ if (i915_gem_request_completed(req))
+ return true;
+ }

/* We need to check whether any gpu reset happened in between
* the request being submitted and now. If a reset has occurred,
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:16:44 UTC
Permalink
The retire worker is a low-frequency task that makes sure we retire
outstanding requests if userspace is being lax. We only need to start it
once as it remains active until the GPU is idle, so do a cheap test
before the more expensive queue_work(). A consequence of this is that we
need correct locking in the worker to make the hot path of request
submission cheap. To keep the symmetry and keep hangcheck strictly bound
by the GPU's wakelock, we move the cancel_sync(hangcheck) to the idle
worker before dropping the wakelock.

v2: Guard against RCU fouling the breadcrumbs bottom-half whilst we kick
the waiter.
v3: Remove the wakeref assertion squelching (now we hold a wakeref for
the hangcheck, any rpm error there is genuine).
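
The hot-path saving comes from gating the queue_work() behind a plain
flag test; condensed from the i915_gem_mark_busy() hunk below:

	static void i915_gem_mark_busy(struct drm_i915_private *dev_priv)
	{
		if (dev_priv->mm.busy) /* cheap test on every request */
			return;

		intel_runtime_pm_get_noresume(dev_priv);
		queue_delayed_work(dev_priv->wq,
				   &dev_priv->mm.retire_work,
				   round_jiffies_up_relative(HZ));
		dev_priv->mm.busy = true;
	}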

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
References: https://bugs.freedesktop.org/show_bug.cgi?id=88437
---
drivers/gpu/drm/i915/i915_drv.c | 2 -
drivers/gpu/drm/i915/i915_drv.h | 2 +-
drivers/gpu/drm/i915/i915_gem.c | 83 ++++++++++++++++++++----------
drivers/gpu/drm/i915/i915_gem_execbuffer.c | 6 +++
drivers/gpu/drm/i915/i915_irq.c | 16 +-----
drivers/gpu/drm/i915/intel_display.c | 29 -----------
6 files changed, 66 insertions(+), 72 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index 5160f1414de4..4c090f1cf69c 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -1490,8 +1490,6 @@ static int intel_runtime_suspend(struct device *device)
i915_gem_release_all_mmaps(dev_priv);
mutex_unlock(&dev->struct_mutex);

- cancel_delayed_work_sync(&dev_priv->gpu_error.hangcheck_work);
-
intel_guc_suspend(dev);

intel_suspend_gt_powersave(dev);
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 7f021505e32f..9ec6f3e9e74d 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2987,7 +2987,7 @@ int __must_check i915_gem_set_seqno(struct drm_device *dev, u32 seqno);
struct drm_i915_gem_request *
i915_gem_find_active_request(struct intel_engine_cs *ring);

-bool i915_gem_retire_requests(struct drm_device *dev);
+void i915_gem_retire_requests(struct drm_device *dev);
void i915_gem_retire_requests_ring(struct intel_engine_cs *ring);

static inline u32 i915_reset_counter(struct i915_gpu_error *error)
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 5ddb2ed0f785..3788fce136f3 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2479,6 +2479,37 @@ i915_gem_get_seqno(struct drm_device *dev, u32 *seqno)
return 0;
}

+static void i915_gem_mark_busy(struct drm_i915_private *dev_priv)
+{
+ if (dev_priv->mm.busy)
+ return;
+
+ intel_runtime_pm_get_noresume(dev_priv);
+
+ i915_update_gfx_val(dev_priv);
+ if (INTEL_INFO(dev_priv)->gen >= 6)
+ gen6_rps_busy(dev_priv);
+
+ queue_delayed_work(dev_priv->wq,
+ &dev_priv->mm.retire_work,
+ round_jiffies_up_relative(HZ));
+
+ dev_priv->mm.busy = true;
+}
+
+static void i915_gem_mark_idle(struct drm_i915_private *dev_priv)
+{
+ dev_priv->mm.busy = false;
+
+ if (cancel_delayed_work_sync(&dev_priv->gpu_error.hangcheck_work))
+ intel_kick_waiters(dev_priv);
+
+ if (INTEL_INFO(dev_priv)->gen >= 6)
+ gen6_rps_idle(dev_priv);
+
+ intel_runtime_pm_put(dev_priv);
+}
+
/*
* NB: This function is not allowed to fail. Doing so would mean the the
* request is not being tracked for completion but the work itself is
@@ -2559,10 +2590,7 @@ void __i915_add_request(struct drm_i915_gem_request *request,

trace_i915_gem_request_add(request);

- queue_delayed_work(dev_priv->wq,
- &dev_priv->mm.retire_work,
- round_jiffies_up_relative(HZ));
- intel_mark_busy(dev_priv->dev);
+ i915_gem_mark_busy(dev_priv);

/* Sanity check that the reserved size was large enough. */
intel_ring_reserved_space_end(ringbuf);
@@ -2892,7 +2920,7 @@ i915_gem_retire_requests_ring(struct intel_engine_cs *ring)
WARN_ON(i915_verify_lists(ring->dev));
}

-bool
+void
i915_gem_retire_requests(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
@@ -2900,6 +2928,9 @@ i915_gem_retire_requests(struct drm_device *dev)
bool idle = true;
int i;

+ if (!dev_priv->mm.busy)
+ return;
+
for_each_ring(ring, dev_priv, i) {
i915_gem_retire_requests_ring(ring);
idle &= list_empty(&ring->request_list);
@@ -2918,8 +2949,6 @@ i915_gem_retire_requests(struct drm_device *dev)
mod_delayed_work(dev_priv->wq,
&dev_priv->mm.idle_work,
msecs_to_jiffies(100));
-
- return idle;
}

static void
@@ -2928,17 +2957,21 @@ i915_gem_retire_work_handler(struct work_struct *work)
struct drm_i915_private *dev_priv =
container_of(work, typeof(*dev_priv), mm.retire_work.work);
struct drm_device *dev = dev_priv->dev;
- bool idle;

/* Come back later if the device is busy... */
- idle = false;
if (mutex_trylock(&dev->struct_mutex)) {
- idle = i915_gem_retire_requests(dev);
+ i915_gem_retire_requests(dev);
mutex_unlock(&dev->struct_mutex);
}
- if (!idle) {
+
+ /* Keep the retire handler running until we are finally idle.
+ * We do not need to do this test under locking as in the worst-case
+ * we queue the retire worker once too often.
+ */
+ if (READ_ONCE(dev_priv->mm.busy)) {
i915_queue_hangcheck(dev_priv);
- queue_delayed_work(dev_priv->wq, &dev_priv->mm.retire_work,
+ queue_delayed_work(dev_priv->wq,
+ &dev_priv->mm.retire_work,
round_jiffies_up_relative(HZ));
}
}
@@ -2952,25 +2985,23 @@ i915_gem_idle_work_handler(struct work_struct *work)
struct intel_engine_cs *ring;
int i;

- for_each_ring(ring, dev_priv, i)
- if (!list_empty(&ring->request_list))
- return;
+ if (!mutex_trylock(&dev->struct_mutex))
+ return;

- /* we probably should sync with hangcheck here, using cancel_work_sync.
- * Also locking seems to be fubar here, ring->request_list is protected
- * by dev->struct_mutex. */
+ if (!dev_priv->mm.busy)
+ goto out;

- intel_mark_idle(dev);
+ for_each_ring(ring, dev_priv, i) {
+ if (!list_empty(&ring->request_list))
+ goto out;

- if (mutex_trylock(&dev->struct_mutex)) {
- struct intel_engine_cs *ring;
- int i;
+ i915_gem_batch_pool_fini(&ring->batch_pool);
+ }

- for_each_ring(ring, dev_priv, i)
- i915_gem_batch_pool_fini(&ring->batch_pool);
+ i915_gem_mark_idle(dev_priv);

- mutex_unlock(&dev->struct_mutex);
- }
+out:
+ mutex_unlock(&dev->struct_mutex);
}

/**
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index b8186bd061c1..da1c6fe5b40e 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -1475,6 +1475,12 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
dispatch_flags |= I915_DISPATCH_RS;
}

+ /* Take a local wakeref for preparing to dispatch the execbuf as
+ * we expect to access the hardware fairly frequently in the
+ * process. Upon first dispatch, we acquire another prolonged
+ * wakeref that we hold until the GPU has been idle for at least
+ * 100ms.
+ */
intel_runtime_pm_get(dev_priv);

ret = i915_mutex_lock_interruptible(dev);
diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index 502663f13cd8..8866e981bcba 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -3047,13 +3047,6 @@ static void i915_hangcheck_elapsed(struct work_struct *work)
if (!i915.enable_hangcheck)
return;

- /*
- * The hangcheck work is synced during runtime suspend, we don't
- * require a wakeref. TODO: instead of disabling the asserts make
- * sure that we hold a reference when this work is running.
- */
- DISABLE_RPM_WAKEREF_ASSERTS(dev_priv);
-
/* As enabling the GPU requires fairly extensive mmio access,
* periodically arm the mmio checker to see if we are triggering
* any invalid access.
@@ -3157,17 +3150,12 @@ static void i915_hangcheck_elapsed(struct work_struct *work)
}
}

- if (rings_hung) {
- i915_handle_error(dev, true, "Ring hung");
- goto out;
- }
+ if (rings_hung)
+ return i915_handle_error(dev, true, "Ring hung");

/* Reset timer in case GPU hangs without another request being added */
if (busy_count)
i915_queue_hangcheck(dev_priv);
-
-out:
- ENABLE_RPM_WAKEREF_ASSERTS(dev_priv);
}

static void ibx_irq_reset(struct drm_device *dev)
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index de4d4a0d923a..8e646780c971 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -10874,35 +10874,6 @@ struct drm_display_mode *intel_crtc_mode_get(struct drm_device *dev,
return mode;
}

-void intel_mark_busy(struct drm_device *dev)
-{
- struct drm_i915_private *dev_priv = dev->dev_private;
-
- if (dev_priv->mm.busy)
- return;
-
- intel_runtime_pm_get(dev_priv);
- i915_update_gfx_val(dev_priv);
- if (INTEL_INFO(dev)->gen >= 6)
- gen6_rps_busy(dev_priv);
- dev_priv->mm.busy = true;
-}
-
-void intel_mark_idle(struct drm_device *dev)
-{
- struct drm_i915_private *dev_priv = dev->dev_private;
-
- if (!dev_priv->mm.busy)
- return;
-
- dev_priv->mm.busy = false;
-
- if (INTEL_INFO(dev)->gen >= 6)
- gen6_rps_idle(dev->dev_private);
-
- intel_runtime_pm_put(dev_priv);
-}
-
static void intel_crtc_destroy(struct drm_crtc *crtc)
{
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:17:29 UTC
Permalink
Initialising the global GTT is tricky as we wish to use the drm_mm range
manager during the modesetting initialisation (to capture stolen
allocations from the BIOS) before we actually enable GEM. To overcome
this, we currently set up the drm_mm first and then carefully rebind
the preallocated (stolen) objects afterwards.
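
A sketch of the resulting ordering (function names as in the patch,
everything else elided):

	i915_driver_load():
		i915_gem_init_global_gtt(dev);	/* drm_mm ready for stolen */
		/* ... modeset init may now reserve BIOS allocations ... */

	i915_gem_init():
		i915_global_gtt_setup(dev);	/* clear holes + guard page */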

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_dma.c | 2 ++
drivers/gpu/drm/i915/i915_gem.c | 5 +--
drivers/gpu/drm/i915/i915_gem_gtt.c | 62 +++++++++++-----------------------
drivers/gpu/drm/i915/i915_gem_gtt.h | 1 +
drivers/gpu/drm/i915/i915_gem_stolen.c | 17 +++++-----
5 files changed, 33 insertions(+), 54 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_dma.c b/drivers/gpu/drm/i915/i915_dma.c
index c0242ce45e43..4a24831a14fa 100644
--- a/drivers/gpu/drm/i915/i915_dma.c
+++ b/drivers/gpu/drm/i915/i915_dma.c
@@ -989,6 +989,8 @@ int i915_driver_load(struct drm_device *dev, unsigned long flags)
dev_priv->gtt.mtrr = arch_phys_wc_add(dev_priv->gtt.mappable_base,
aperture_size);

+ i915_gem_init_global_gtt(dev);
+
/* The i915 workqueue is primarily used for batched retirement of
* requests (and thus managing bo) once the task has been completed
* by the GPU. i915_gem_retire_requests() is called directly when we
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index e4d7c7f5aca2..44bd514a6c2e 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -4237,7 +4237,9 @@ int i915_gem_init(struct drm_device *dev)
if (ret)
goto out_unlock;

- i915_gem_init_global_gtt(dev);
+ ret = i915_global_gtt_setup(dev);
+ if (ret)
+ goto out_unlock;

ret = i915_gem_context_init(dev);
if (ret)
@@ -4312,7 +4314,6 @@ i915_gem_load(struct drm_device *dev)
SLAB_HWCACHE_ALIGN,
NULL);

- INIT_LIST_HEAD(&dev_priv->vm_list);
INIT_LIST_HEAD(&dev_priv->context_list);
INIT_LIST_HEAD(&dev_priv->mm.unbound_list);
INIT_LIST_HEAD(&dev_priv->mm.bound_list);
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 6168182a87d8..b5c3bbe6dc2a 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -2681,10 +2681,7 @@ static void i915_gtt_color_adjust(struct drm_mm_node *node,
}
}

-static int i915_gem_setup_global_gtt(struct drm_device *dev,
- u64 start,
- u64 mappable_end,
- u64 end)
+int i915_global_gtt_setup(struct drm_device *dev)
{
/* Let GEM Manage all of the aperture.
*
@@ -2697,48 +2694,16 @@ static int i915_gem_setup_global_gtt(struct drm_device *dev,
*/
struct drm_i915_private *dev_priv = dev->dev_private;
struct i915_address_space *ggtt_vm = &dev_priv->gtt.base;
- struct drm_mm_node *entry;
- struct drm_i915_gem_object *obj;
unsigned long hole_start, hole_end;
+ struct drm_mm_node *entry;
int ret;

- BUG_ON(mappable_end > end);
-
- ggtt_vm->start = start;
-
- /* Subtract the guard page before address space initialization to
- * shrink the range used by drm_mm */
- ggtt_vm->total = end - start - PAGE_SIZE;
- i915_address_space_init(ggtt_vm, dev_priv);
- ggtt_vm->total += PAGE_SIZE;
-
if (intel_vgpu_active(dev)) {
ret = intel_vgt_balloon(dev);
if (ret)
return ret;
}

- if (!HAS_LLC(dev))
- ggtt_vm->mm.color_adjust = i915_gtt_color_adjust;
-
- /* Mark any preallocated objects as occupied */
- list_for_each_entry(obj, &dev_priv->mm.bound_list, global_list) {
- struct i915_vma *vma = i915_gem_obj_to_vma(obj, ggtt_vm);
-
- DRM_DEBUG_KMS("reserving preallocated space: %llx + %zx\n",
- i915_gem_obj_ggtt_offset(obj), obj->base.size);
-
- WARN_ON(i915_gem_obj_ggtt_bound(obj));
- ret = drm_mm_reserve_node(&ggtt_vm->mm, &vma->node);
- if (ret) {
- DRM_DEBUG_KMS("Reservation failed: %i\n", ret);
- return ret;
- }
- vma->bound |= GLOBAL_BIND;
- __i915_vma_set_map_and_fenceable(vma);
- list_add_tail(&vma->vm_link, &ggtt_vm->inactive_list);
- }
-
/* Clear any non-preallocated blocks */
drm_mm_for_each_hole(entry, &ggtt_vm->mm, hole_start, hole_end) {
DRM_DEBUG_KMS("clearing unused GTT space: [%lx, %lx]\n",
@@ -2748,7 +2713,9 @@ static int i915_gem_setup_global_gtt(struct drm_device *dev,
}

/* And finally clear the reserved guard page */
- ggtt_vm->clear_range(ggtt_vm, end - PAGE_SIZE, PAGE_SIZE, true);
+ ggtt_vm->clear_range(ggtt_vm,
+ ggtt_vm->total - PAGE_SIZE, PAGE_SIZE,
+ true);

if (USES_PPGTT(dev) && !USES_FULL_PPGTT(dev)) {
struct i915_hw_ppgtt *ppgtt;
@@ -2788,13 +2755,22 @@ static int i915_gem_setup_global_gtt(struct drm_device *dev,

void i915_gem_init_global_gtt(struct drm_device *dev)
{
- struct drm_i915_private *dev_priv = dev->dev_private;
- u64 gtt_size, mappable_size;
+ struct drm_i915_private *dev_priv = to_i915(dev);
+ struct i915_address_space *ggtt_vm = &dev_priv->gtt.base;

- gtt_size = dev_priv->gtt.base.total;
- mappable_size = dev_priv->gtt.mappable_end;
+ INIT_LIST_HEAD(&dev_priv->vm_list);

- i915_gem_setup_global_gtt(dev, 0, mappable_size, gtt_size);
+ if (WARN_ON(dev_priv->gtt.mappable_end > ggtt_vm->total))
+ dev_priv->gtt.mappable_end = ggtt_vm->total;
+
+ if (!HAS_LLC(dev))
+ ggtt_vm->mm.color_adjust = i915_gtt_color_adjust;
+
+ /* Subtract the guard page before address space initialization to
+ * shrink the range used by drm_mm */
+ ggtt_vm->total -= PAGE_SIZE;
+ i915_address_space_init(ggtt_vm, dev_priv);
+ ggtt_vm->total += PAGE_SIZE;
}

void i915_global_gtt_cleanup(struct drm_device *dev)
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
index 2497671d1e1a..cb796c1ff6a5 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.h
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
@@ -514,6 +514,7 @@ i915_page_dir_dma_addr(const struct i915_hw_ppgtt *ppgtt, const unsigned n)

int i915_gem_gtt_init(struct drm_device *dev);
void i915_gem_init_global_gtt(struct drm_device *dev);
+int i915_global_gtt_setup(struct drm_device *dev);
void i915_global_gtt_cleanup(struct drm_device *dev);


diff --git a/drivers/gpu/drm/i915/i915_gem_stolen.c b/drivers/gpu/drm/i915/i915_gem_stolen.c
index 590e635cb65c..463be259a505 100644
--- a/drivers/gpu/drm/i915/i915_gem_stolen.c
+++ b/drivers/gpu/drm/i915/i915_gem_stolen.c
@@ -683,18 +683,17 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_device *dev,
*/
vma->node.start = gtt_offset;
vma->node.size = size;
- if (drm_mm_initialized(&ggtt->mm)) {
- ret = drm_mm_reserve_node(&ggtt->mm, &vma->node);
- if (ret) {
- DRM_DEBUG_KMS("failed to allocate stolen GTT space\n");
- goto err;
- }

- vma->bound |= GLOBAL_BIND;
- __i915_vma_set_map_and_fenceable(vma);
- list_add_tail(&vma->vm_link, &ggtt->inactive_list);
+ ret = drm_mm_reserve_node(&ggtt->mm, &vma->node);
+ if (ret) {
+ DRM_DEBUG_KMS("failed to allocate stolen GTT space\n");
+ goto err;
}

+ vma->bound |= GLOBAL_BIND;
+ __i915_vma_set_map_and_fenceable(vma);
+ list_add_tail(&vma->vm_link, &ggtt->inactive_list);
+
list_add_tail(&obj->global_list, &dev_priv->mm.bound_list);
i915_gem_object_pin_pages(obj);
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:17:17 UTC
It is simpler, and leads to more readable code through the callstack, if
the allocation returns the allocated struct through the return value.

The importance of this is that it no longer looks like we accidentally
allocate requests as a side-effect of calling certain functions.
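As a standalone illustration of the calling convention this patch moves
to, here is a minimal userspace sketch; the ERR_PTR/IS_ERR/PTR_ERR
macros below are simplified stand-ins for the kernel's <linux/err.h>
helpers, and struct request plus request_alloc() are hypothetical:

#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Simplified stand-ins for <linux/err.h>: errnos are encoded in the
 * top 4095 values of the pointer range, so no extra out-parameter is
 * needed to report failure.
 */
#define MAX_ERRNO	4095
#define ERR_PTR(err)	((void *)(intptr_t)(err))
#define PTR_ERR(ptr)	((long)(intptr_t)(ptr))
#define IS_ERR(ptr)	((uintptr_t)(ptr) >= (uintptr_t)-MAX_ERRNO)

struct request { uint32_t seqno; };

/* The error travels in the returned pointer itself. */
static struct request *request_alloc(uint32_t seqno)
{
	struct request *req;

	if (seqno == 0)
		return ERR_PTR(-EINVAL);

	req = malloc(sizeof(*req));
	if (req == NULL)
		return ERR_PTR(-ENOMEM);

	req->seqno = seqno;
	return req;
}

int main(void)
{
	struct request *req = request_alloc(1);

	if (IS_ERR(req))
		return (int)-PTR_ERR(req);

	printf("allocated request, seqno=%u\n", (unsigned)req->seqno);
	free(req);
	return 0;
}

The shape mirrors the conversions in the diff below: the two-step
"ret = alloc(..., &req); if (ret)" pattern collapses into a single
assignment plus an IS_ERR() check, and nothing can allocate a request
behind the caller's back.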

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_drv.h | 3 +-
drivers/gpu/drm/i915/i915_gem.c | 82 ++++++++++--------------------
drivers/gpu/drm/i915/i915_gem_execbuffer.c | 8 +--
drivers/gpu/drm/i915/i915_gem_request.c | 22 +++-----
drivers/gpu/drm/i915/i915_gem_request.h | 6 +--
drivers/gpu/drm/i915/i915_trace.h | 15 +++---
drivers/gpu/drm/i915/intel_display.c | 25 +++++----
drivers/gpu/drm/i915/intel_lrc.c | 6 +--
drivers/gpu/drm/i915/intel_overlay.c | 24 ++++-----
9 files changed, 77 insertions(+), 114 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 44e8738c5310..0c580124d46d 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2786,8 +2786,7 @@ static inline void i915_gem_object_unpin_vmap(struct drm_i915_gem_object *obj)

int __must_check i915_mutex_lock_interruptible(struct drm_device *dev);
int i915_gem_object_sync(struct drm_i915_gem_object *obj,
- struct intel_engine_cs *to,
- struct drm_i915_gem_request **to_req);
+ struct drm_i915_gem_request *to);
void i915_vma_move_to_active(struct i915_vma *vma,
struct drm_i915_gem_request *req);
int i915_gem_dumb_create(struct drm_file *file_priv,
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 1c6beb154d07..5b5afdcd9634 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2550,47 +2550,35 @@ out:

static int
__i915_gem_object_sync(struct drm_i915_gem_object *obj,
- struct intel_engine_cs *to,
- struct drm_i915_gem_request *from_req,
- struct drm_i915_gem_request **to_req)
+ struct drm_i915_gem_request *to,
+ struct drm_i915_gem_request *from)
{
- struct intel_engine_cs *from;
int ret;

- from = from_req->engine;
- if (to == from)
+ if (to->engine == from->engine)
return 0;

- if (i915_gem_request_completed(from_req))
+ if (i915_gem_request_completed(from))
return 0;

if (!i915.semaphores) {
- struct drm_i915_private *i915 = from_req->i915;
- ret = __i915_wait_request(from_req,
- i915->mm.interruptible,
+ ret = __i915_wait_request(from,
+ to->i915->mm.interruptible,
NULL,
NO_WAITBOOST);
if (ret)
return ret;

- i915_gem_object_retire_request(obj, from_req);
+ i915_gem_object_retire_request(obj, from);
} else {
- int idx = intel_engine_sync_index(from, to);
- u32 seqno = i915_gem_request_get_seqno(from_req);
+ int idx = intel_engine_sync_index(from->engine, to->engine);
+ u32 seqno = i915_gem_request_get_seqno(from);

- WARN_ON(!to_req);
-
- if (seqno <= from->semaphore.sync_seqno[idx])
+ if (seqno <= from->engine->semaphore.sync_seqno[idx])
return 0;

- if (*to_req == NULL) {
- ret = i915_gem_request_alloc(to, to->default_context, to_req);
- if (ret)
- return ret;
- }
-
- trace_i915_gem_ring_sync_to(*to_req, from, from_req);
- ret = to->semaphore.sync_to(*to_req, from, seqno);
+ trace_i915_gem_ring_sync_to(to, from);
+ ret = to->engine->semaphore.sync_to(to, from->engine, seqno);
if (ret)
return ret;

@@ -2598,8 +2586,8 @@ __i915_gem_object_sync(struct drm_i915_gem_object *obj,
* might have just caused seqno wrap under
* the radar.
*/
- from->semaphore.sync_seqno[idx] =
- i915_gem_request_get_seqno(obj->last_read_req[from->id]);
+ from->engine->semaphore.sync_seqno[idx] =
+ i915_gem_request_get_seqno(obj->last_read_req[from->engine->id]);
}

return 0;
@@ -2609,17 +2597,12 @@ __i915_gem_object_sync(struct drm_i915_gem_object *obj,
* i915_gem_object_sync - sync an object to a ring.
*
* @obj: object which may be in use on another ring.
- * @to: ring we wish to use the object on. May be NULL.
- * @to_req: request we wish to use the object for. See below.
- * This will be allocated and returned if a request is
- * required but not passed in.
+ * @to: request we wish to use the object with
*
* This code is meant to abstract object synchronization with the GPU.
- * Calling with NULL implies synchronizing the object with the CPU
- * rather than a particular GPU ring. Conceptually we serialise writes
- * between engines inside the GPU. We only allow one engine to write
- * into a buffer at any time, but multiple readers. To ensure each has
- * a coherent view of memory, we must:
+ * Conceptually we serialise writes between engines inside the GPU.
+ * We only allow one engine to write into a buffer at any time, but
+ * multiple readers. To ensure each has a coherent view of memory, we must:
*
* - If there is an outstanding write request to the object, the new
* request must wait for it to complete (either CPU or in hw, requests
@@ -2628,22 +2611,11 @@ __i915_gem_object_sync(struct drm_i915_gem_object *obj,
* - If we are a write request (pending_write_domain is set), the new
* request must wait for outstanding read requests to complete.
*
- * For CPU synchronisation (NULL to) no request is required. For syncing with
- * rings to_req must be non-NULL. However, a request does not have to be
- * pre-allocated. If *to_req is NULL and sync commands will be emitted then a
- * request will be allocated automatically and returned through *to_req. Note
- * that it is not guaranteed that commands will be emitted (because the system
- * might already be idle). Hence there is no need to create a request that
- * might never have any work submitted. Note further that if a request is
- * returned in *to_req, it is the responsibility of the caller to submit
- * that request (after potentially adding more work to it).
- *
* Returns 0 if successful, else propagates up the lower layer error.
*/
int
i915_gem_object_sync(struct drm_i915_gem_object *obj,
- struct intel_engine_cs *to,
- struct drm_i915_gem_request **to_req)
+ struct drm_i915_gem_request *to)
{
const bool readonly = obj->base.pending_write_domain == 0;
struct drm_i915_gem_request *req[I915_NUM_RINGS];
@@ -2652,9 +2624,6 @@ i915_gem_object_sync(struct drm_i915_gem_object *obj,
if (!obj->active)
return 0;

- if (to == NULL)
- return i915_gem_object_wait_rendering(obj, readonly);
-
n = 0;
if (readonly) {
if (obj->last_write_req)
@@ -2665,7 +2634,7 @@ i915_gem_object_sync(struct drm_i915_gem_object *obj,
req[n++] = obj->last_read_req[i];
}
for (i = 0; i < n; i++) {
- ret = __i915_gem_object_sync(obj, to, req[i], to_req);
+ ret = __i915_gem_object_sync(obj, to, req[i]);
if (ret)
return ret;
}
@@ -2783,9 +2752,9 @@ int i915_gpu_idle(struct drm_device *dev)
if (!i915.enable_execlists) {
struct drm_i915_gem_request *req;

- ret = i915_gem_request_alloc(ring, ring->default_context, &req);
- if (ret)
- return ret;
+ req = i915_gem_request_alloc(ring, ring->default_context);
+ if (IS_ERR(req))
+ return PTR_ERR(req);

ret = i915_switch_context(req);
i915_add_request_no_flush(req);
@@ -4263,8 +4232,9 @@ i915_gem_init_hw(struct drm_device *dev)

WARN_ON(!ring->default_context);

- ret = i915_gem_request_alloc(ring, ring->default_context, &req);
- if (ret) {
+ req = i915_gem_request_alloc(ring, ring->default_context);
+ if (IS_ERR(req)) {
+ ret = PTR_ERR(req);
i915_gem_cleanup_ringbuffer(dev);
goto out;
}
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index a56fae99a1bc..3956d74d8c8c 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -952,7 +952,7 @@ i915_gem_execbuffer_move_to_gpu(struct drm_i915_gem_request *req,
struct drm_i915_gem_object *obj = vma->obj;

if (obj->active & other_rings) {
- ret = i915_gem_object_sync(obj, req->engine, &req);
+ ret = i915_gem_object_sync(obj, req);
if (ret)
return ret;
}
@@ -1595,9 +1595,11 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
params->batch_obj_vm_offset = i915_gem_obj_offset(batch_obj, vm);

/* Allocate a request for this batch buffer nice and early. */
- ret = i915_gem_request_alloc(ring, ctx, &params->request);
- if (ret)
+ params->request = i915_gem_request_alloc(ring, ctx);
+ if (IS_ERR(params->request)) {
+ ret = PTR_ERR(params->request);
goto err_batch_unpin;
+ }

ret = i915_gem_request_add_to_client(params->request, file);
if (ret) {
diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
index e911430575fe..ce663acc9c7d 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/i915_gem_request.c
@@ -195,9 +195,9 @@ i915_gem_get_seqno(struct drm_i915_private *dev_priv, u32 *seqno)
return 0;
}

-int i915_gem_request_alloc(struct intel_engine_cs *engine,
- struct intel_context *ctx,
- struct drm_i915_gem_request **req_out)
+struct drm_i915_gem_request *
+i915_gem_request_alloc(struct intel_engine_cs *engine,
+ struct intel_context *ctx)
{
struct drm_i915_private *dev_priv = engine->i915;
unsigned reset_counter = i915_reset_counter(&dev_priv->gpu_error);
@@ -205,22 +205,17 @@ int i915_gem_request_alloc(struct intel_engine_cs *engine,
u32 seqno;
int ret;

- if (!req_out)
- return -EINVAL;
-
- *req_out = NULL;
-
/* ABI: Before userspace accesses the GPU (e.g. execbuffer), report
* EIO if the GPU is already wedged, or EAGAIN to drop the struct_mutex
* and restart.
*/
ret = i915_gem_check_wedge(reset_counter, dev_priv->mm.interruptible);
if (ret)
- return ret;
+ return ERR_PTR(ret);

req = kmem_cache_zalloc(dev_priv->requests, GFP_KERNEL);
if (req == NULL)
- return -ENOMEM;
+ return ERR_PTR(-ENOMEM);

ret = i915_gem_get_seqno(dev_priv, &seqno);
if (ret)
@@ -265,15 +260,14 @@ int i915_gem_request_alloc(struct intel_engine_cs *engine,
* free code.
*/
i915_gem_request_cancel(req);
- return ret;
+ return ERR_PTR(ret);
}

- *req_out = req;
- return 0;
+ return req;

err:
kmem_cache_free(dev_priv->requests, req);
- return ret;
+ return ERR_PTR(ret);
}

int i915_gem_request_add_to_client(struct drm_i915_gem_request *req,
diff --git a/drivers/gpu/drm/i915/i915_gem_request.h b/drivers/gpu/drm/i915/i915_gem_request.h
index 086950567db4..2da9e0b5dfc7 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.h
+++ b/drivers/gpu/drm/i915/i915_gem_request.h
@@ -118,9 +118,9 @@ struct drm_i915_gem_request {
int elsp_submitted;
};

-int i915_gem_request_alloc(struct intel_engine_cs *ring,
- struct intel_context *ctx,
- struct drm_i915_gem_request **req_out);
+struct drm_i915_gem_request *
+i915_gem_request_alloc(struct intel_engine_cs *ring,
+ struct intel_context *ctx);
void i915_gem_request_cancel(struct drm_i915_gem_request *req);
int i915_gem_request_add_to_client(struct drm_i915_gem_request *req,
struct drm_file *file);
diff --git a/drivers/gpu/drm/i915/i915_trace.h b/drivers/gpu/drm/i915/i915_trace.h
index 95cab4776401..85469e3c740a 100644
--- a/drivers/gpu/drm/i915/i915_trace.h
+++ b/drivers/gpu/drm/i915/i915_trace.h
@@ -460,10 +460,9 @@ TRACE_EVENT(i915_gem_evict_vm,
);

TRACE_EVENT(i915_gem_ring_sync_to,
- TP_PROTO(struct drm_i915_gem_request *to_req,
- struct intel_engine_cs *from,
- struct drm_i915_gem_request *req),
- TP_ARGS(to_req, from, req),
+ TP_PROTO(struct drm_i915_gem_request *to,
+ struct drm_i915_gem_request *from),
+ TP_ARGS(to, from),

TP_STRUCT__entry(
__field(u32, dev)
@@ -473,10 +472,10 @@ TRACE_EVENT(i915_gem_ring_sync_to,
),

TP_fast_assign(
- __entry->dev = from->dev->primary->index;
- __entry->sync_from = from->id;
- __entry->sync_to = to_req->engine->id;
- __entry->seqno = i915_gem_request_get_seqno(req);
+ __entry->dev = from->i915->dev->primary->index;
+ __entry->sync_from = from->engine->id;
+ __entry->sync_to = to->engine->id;
+ __entry->seqno = from->fence.seqno;
),

TP_printk("dev=%u, sync-from=%u, sync-to=%u, seqno=%u",
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index f8717c5627dd..ec52fff7e0b0 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -11669,15 +11669,21 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
* into the display plane and skip any waits.
*/
if (!mmio_flip) {
- ret = i915_gem_object_sync(obj, ring, &request);
- if (ret)
+ request = i915_gem_request_alloc(ring, ring->default_context);
+ if (IS_ERR(request)) {
+ ret = PTR_ERR(request);
goto cleanup_pending;
+ }
+
+ ret = i915_gem_object_sync(obj, request);
+ if (ret)
+ goto cleanup_request;
}

ret = intel_pin_and_fence_fb_obj(crtc->primary, fb,
crtc->primary->state);
if (ret)
- goto cleanup_pending;
+ goto cleanup_request;

work->gtt_offset = intel_plane_obj_offset(to_intel_plane(primary),
obj, 0);
@@ -11691,23 +11697,15 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,
i915_gem_request_assign(&work->flip_queued_req,
obj->last_write_req);
} else {
- if (!request) {
- ret = i915_gem_request_alloc(ring, ring->default_context, &request);
- if (ret)
- goto cleanup_unpin;
- }
-
ret = dev_priv->display.queue_flip(dev, crtc, fb, obj, request,
page_flip_flags);
if (ret)
goto cleanup_unpin;

+ i915_add_request_no_flush(request);
i915_gem_request_assign(&work->flip_queued_req, request);
}

- if (request)
- i915_add_request_no_flush(request);
-
work->flip_queued_vblank = drm_crtc_vblank_count(crtc);
work->enable_stall_check = true;

@@ -11725,9 +11723,10 @@ static int intel_crtc_page_flip(struct drm_crtc *crtc,

cleanup_unpin:
intel_unpin_fb_obj(fb, crtc->primary->state);
-cleanup_pending:
+cleanup_request:
if (request)
i915_add_request_no_flush(request);
+cleanup_pending:
atomic_dec(&intel_crtc->unpin_work_count);
mutex_unlock(&dev->struct_mutex);
cleanup:
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index b889680f7491..82b21a883732 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -629,7 +629,7 @@ static int execlists_move_to_gpu(struct drm_i915_gem_request *req,
struct drm_i915_gem_object *obj = vma->obj;

if (obj->active & other_rings) {
- ret = i915_gem_object_sync(obj, req->engine, &req);
+ ret = i915_gem_object_sync(obj, req);
if (ret)
return ret;
}
@@ -2264,8 +2264,9 @@ int intel_lr_context_deferred_alloc(struct intel_context *ctx,
if (ctx != engine->default_context && engine->init_context) {
struct drm_i915_gem_request *req;

- ret = i915_gem_request_alloc(engine, ctx, &req);
- if (ret) {
+ req = i915_gem_request_alloc(engine, ctx);
+ if (IS_ERR(req)) {
+ ret = PTR_ERR(req);
DRM_ERROR("ring create req: %d\n",
ret);
goto error_ringbuf;
diff --git a/drivers/gpu/drm/i915/intel_overlay.c b/drivers/gpu/drm/i915/intel_overlay.c
index cb73d16848b0..df71c01f28f1 100644
--- a/drivers/gpu/drm/i915/intel_overlay.c
+++ b/drivers/gpu/drm/i915/intel_overlay.c
@@ -240,9 +240,9 @@ static int intel_overlay_on(struct intel_overlay *overlay)
WARN_ON(overlay->active);
WARN_ON(IS_I830(dev) && !(dev_priv->quirks & QUIRK_PIPEA_FORCE));

- ret = i915_gem_request_alloc(ring, ring->default_context, &req);
- if (ret)
- return ret;
+ req = i915_gem_request_alloc(ring, ring->default_context);
+ if (IS_ERR(req))
+ return PTR_ERR(req);

ret = intel_ring_begin(req, 4);
if (ret) {
@@ -283,9 +283,9 @@ static int intel_overlay_continue(struct intel_overlay *overlay,
if (tmp & (1 << 17))
DRM_DEBUG("overlay underrun, DOVSTA: %x\n", tmp);

- ret = i915_gem_request_alloc(ring, ring->default_context, &req);
- if (ret)
- return ret;
+ req = i915_gem_request_alloc(ring, ring->default_context);
+ if (IS_ERR(req))
+ return PTR_ERR(req);

ret = intel_ring_begin(req, 2);
if (ret) {
@@ -349,9 +349,9 @@ static int intel_overlay_off(struct intel_overlay *overlay)
* of the hw. Do it in both cases */
flip_addr |= OFC_UPDATE;

- ret = i915_gem_request_alloc(ring, ring->default_context, &req);
- if (ret)
- return ret;
+ req = i915_gem_request_alloc(ring, ring->default_context);
+ if (IS_ERR(req))
+ return PTR_ERR(req);

ret = intel_ring_begin(req, 6);
if (ret) {
@@ -423,9 +423,9 @@ static int intel_overlay_release_old_vid(struct intel_overlay *overlay)
/* synchronous slowpath */
struct drm_i915_gem_request *req;

- ret = i915_gem_request_alloc(ring, ring->default_context, &req);
- if (ret)
- return ret;
+ req = i915_gem_request_alloc(ring, ring->default_context);
+ if (IS_ERR(req))
+ return PTR_ERR(req);

ret = intel_ring_begin(req, 2);
if (ret) {
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:16:55 UTC
Migrate the request operations out of the main body of i915_gem.c and
into their own C file for easier expansion.

v2: Move __i915_add_request() across as well
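
One helper carried across by this move is the wrap-safe seqno
comparison, i915_seqno_passed(), which relies on signed 32-bit
arithmetic so that ordering survives the u32 seqno wrapping around.
A standalone sketch of that idiom (the helper matches the driver's
logic; the test harness is illustrative only):

#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Returns true if seq1 is later than (or equal to) seq2, even across
 * u32 wrap-around: the subtraction is reduced mod 2^32 and the signed
 * cast interprets differences of less than 2^31 as "ahead".
 */
static bool i915_seqno_passed(uint32_t seq1, uint32_t seq2)
{
	return (int32_t)(seq1 - seq2) >= 0;
}

int main(void)
{
	assert(i915_seqno_passed(2, 1));
	/* A naive seq1 >= seq2 fails here: 1 comes after 0xffffffff. */
	assert(i915_seqno_passed(1, 0xffffffffu));
	assert(!i915_seqno_passed(0xffffffffu, 1));
	return 0;
}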

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/Makefile | 1 +
drivers/gpu/drm/i915/i915_drv.h | 205 +---------
drivers/gpu/drm/i915/i915_gem.c | 652 +------------------------------
drivers/gpu/drm/i915/i915_gem_request.c | 659 ++++++++++++++++++++++++++++++++
drivers/gpu/drm/i915/i915_gem_request.h | 223 +++++++++++
5 files changed, 895 insertions(+), 845 deletions(-)
create mode 100644 drivers/gpu/drm/i915/i915_gem_request.c
create mode 100644 drivers/gpu/drm/i915/i915_gem_request.h

diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index 99ce591c8574..b0a83215db80 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -31,6 +31,7 @@ i915-y += i915_cmd_parser.o \
i915_gem_gtt.o \
i915_gem.o \
i915_gem_render_state.o \
+ i915_gem_request.o \
i915_gem_shrinker.o \
i915_gem_stolen.o \
i915_gem_tiling.o \
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 57e450e25ad6..ee146ce02412 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -41,6 +41,7 @@
#include "intel_lrc.h"
#include "i915_gem_gtt.h"
#include "i915_gem_render_state.h"
+#include "i915_gem_request.h"
#include <linux/io-mapping.h>
#include <linux/i2c.h>
#include <linux/i2c-algo-bit.h>
@@ -2162,179 +2163,15 @@ struct drm_i915_gem_object {
};
#define to_intel_bo(x) container_of(x, struct drm_i915_gem_object, base)

-void i915_gem_track_fb(struct drm_i915_gem_object *old,
- struct drm_i915_gem_object *new,
- unsigned frontbuffer_bits);
-
-/**
- * Request queue structure.
- *
- * The request queue allows us to note sequence numbers that have been emitted
- * and may be associated with active buffers to be retired.
- *
- * By keeping this list, we can avoid having to do questionable sequence
- * number comparisons on buffer last_read|write_seqno. It also allows an
- * emission time to be associated with the request for tracking how far ahead
- * of the GPU the submission is.
- *
- * The requests are reference counted, so upon creation they should have an
- * initial reference taken using kref_init
- */
-struct drm_i915_gem_request {
- struct kref ref;
-
- /** On Which ring this request was generated */
- struct drm_i915_private *i915;
- struct intel_engine_cs *ring;
- unsigned reset_counter;
-
- /** GEM sequence number associated with the previous request,
- * when the HWS breadcrumb is equal to this the GPU is processing
- * this request.
- */
- u32 previous_seqno;
-
- /** GEM sequence number associated with this request,
- * when the HWS breadcrumb is equal or greater than this the GPU
- * has finished processing this request.
- */
- u32 seqno;
-
- /** Position in the ringbuffer of the start of the request */
- u32 head;
-
- /**
- * Position in the ringbuffer of the start of the postfix.
- * This is required to calculate the maximum available ringbuffer
- * space without overwriting the postfix.
- */
- u32 postfix;
-
- /** Position in the ringbuffer of the end of the whole request */
- u32 tail;
-
- /**
- * Context and ring buffer related to this request
- * Contexts are refcounted, so when this request is associated with a
- * context, we must increment the context's refcount, to guarantee that
- * it persists while any request is linked to it. Requests themselves
- * are also refcounted, so the request will only be freed when the last
- * reference to it is dismissed, and the code in
- * i915_gem_request_free() will then decrement the refcount on the
- * context.
- */
- struct intel_context *ctx;
- struct intel_ringbuffer *ringbuf;
-
- /** Batch buffer related to this request if any (used for
- error state dump only) */
- struct drm_i915_gem_object *batch_obj;
-
- /** Time at which this request was emitted, in jiffies. */
- unsigned long emitted_jiffies;
-
- /** global list entry for this request */
- struct list_head list;
-
- struct drm_i915_file_private *file_priv;
- /** file_priv list entry for this request */
- struct list_head client_list;
-
- /** process identifier submitting this request */
- struct pid *pid;
-
- /**
- * The ELSP only accepts two elements at a time, so we queue
- * context/tail pairs on a given queue (ring->execlist_queue) until the
- * hardware is available. The queue serves a double purpose: we also use
- * it to keep track of the up to 2 contexts currently in the hardware
- * (usually one in execution and the other queued up by the GPU): We
- * only remove elements from the head of the queue when the hardware
- * informs us that an element has been completed.
- *
- * All accesses to the queue are mediated by a spinlock
- * (ring->execlist_lock).
- */
-
- /** Execlist link in the submission queue.*/
- struct list_head execlist_link;
-
- /** Execlists no. of times this request has been sent to the ELSP */
- int elsp_submitted;
-
-};
-
#ifdef CONFIG_DRM_I915_DEBUG_GEM
#define GEM_BUG_ON(expr) BUG_ON(expr)
#else
#define GEM_BUG_ON(expr)
#endif

-int i915_gem_request_alloc(struct intel_engine_cs *ring,
- struct intel_context *ctx,
- struct drm_i915_gem_request **req_out);
-void i915_gem_request_cancel(struct drm_i915_gem_request *req);
-void i915_gem_request_free(struct kref *req_ref);
-int i915_gem_request_add_to_client(struct drm_i915_gem_request *req,
- struct drm_file *file);
-
-static inline uint32_t
-i915_gem_request_get_seqno(struct drm_i915_gem_request *req)
-{
- return req ? req->seqno : 0;
-}
-
-static inline struct intel_engine_cs *
-i915_gem_request_get_ring(struct drm_i915_gem_request *req)
-{
- return req ? req->ring : NULL;
-}
-
-static inline struct drm_i915_gem_request *
-i915_gem_request_reference(struct drm_i915_gem_request *req)
-{
- if (req)
- kref_get(&req->ref);
- return req;
-}
-
-static inline void
-i915_gem_request_unreference(struct drm_i915_gem_request *req)
-{
- WARN_ON(!mutex_is_locked(&req->ring->dev->struct_mutex));
- kref_put(&req->ref, i915_gem_request_free);
-}
-
-static inline void
-i915_gem_request_unreference__unlocked(struct drm_i915_gem_request *req)
-{
- struct drm_device *dev;
-
- if (!req)
- return;
-
- dev = req->ring->dev;
- if (kref_put_mutex(&req->ref, i915_gem_request_free, &dev->struct_mutex))
- mutex_unlock(&dev->struct_mutex);
-}
-
-static inline void i915_gem_request_assign(struct drm_i915_gem_request **pdst,
- struct drm_i915_gem_request *src)
-{
- if (src)
- i915_gem_request_reference(src);
-
- if (*pdst)
- i915_gem_request_unreference(*pdst);
-
- *pdst = src;
-}
-
-/*
- * XXX: i915_gem_request_completed should be here but currently needs the
- * definition of i915_seqno_passed() which is below. It will be moved in
- * a later patch when the call to i915_seqno_passed() is obsoleted...
- */
+void i915_gem_track_fb(struct drm_i915_gem_object *old,
+ struct drm_i915_gem_object *new,
+ unsigned frontbuffer_bits);

/*
* A command that requires special handling by the command parser.
@@ -2956,28 +2793,6 @@ int i915_gem_dumb_create(struct drm_file *file_priv,
struct drm_mode_create_dumb *args);
int i915_gem_mmap_gtt(struct drm_file *file_priv, struct drm_device *dev,
uint32_t handle, uint64_t *offset);
-/**
- * Returns true if seq1 is later than seq2.
- */
-static inline bool
-i915_seqno_passed(uint32_t seq1, uint32_t seq2)
-{
- return (int32_t)(seq1 - seq2) >= 0;
-}
-
-static inline bool i915_gem_request_started(struct drm_i915_gem_request *req)
-{
- return i915_seqno_passed(intel_ring_get_seqno(req->ring),
- req->previous_seqno);
-}
-
-static inline bool i915_gem_request_completed(struct drm_i915_gem_request *req)
-{
- return i915_seqno_passed(intel_ring_get_seqno(req->ring),
- req->seqno);
-}
-
-int __must_check i915_gem_get_seqno(struct drm_device *dev, u32 *seqno);
int __must_check i915_gem_set_seqno(struct drm_device *dev, u32 seqno);

struct drm_i915_gem_request *
@@ -3036,18 +2851,6 @@ void i915_gem_init_swizzling(struct drm_device *dev);
void i915_gem_cleanup_ringbuffer(struct drm_device *dev);
int __must_check i915_gpu_idle(struct drm_device *dev);
int __must_check i915_gem_suspend(struct drm_device *dev);
-void __i915_add_request(struct drm_i915_gem_request *req,
- struct drm_i915_gem_object *batch_obj,
- bool flush_caches);
-#define i915_add_request(req) \
- __i915_add_request(req, NULL, true)
-#define i915_add_request_no_flush(req) \
- __i915_add_request(req, NULL, false)
-int __i915_wait_request(struct drm_i915_gem_request *req,
- bool interruptible,
- s64 *timeout,
- struct intel_rps_client *rps);
-int __must_check i915_wait_request(struct drm_i915_gem_request *req);
int i915_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf);
int __must_check
i915_gem_object_wait_rendering(struct drm_i915_gem_object *obj,
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index ea9344503bf6..68a25617ca7a 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1103,365 +1103,6 @@ put_rpm:
return ret;
}

-static int
-i915_gem_check_wedge(unsigned reset_counter, bool interruptible)
-{
- if (__i915_terminally_wedged(reset_counter))
- return -EIO;
-
- if (__i915_reset_in_progress(reset_counter)) {
- /* Non-interruptible callers can't handle -EAGAIN, hence return
- * -EIO unconditionally for these. */
- if (!interruptible)
- return -EIO;
-
- return -EAGAIN;
- }
-
- return 0;
-}
-
-static unsigned long local_clock_us(unsigned *cpu)
-{
- unsigned long t;
-
- /* Cheaply and approximately convert from nanoseconds to microseconds.
- * The result and subsequent calculations are also defined in the same
- * approximate microseconds units. The principal source of timing
- * error here is from the simple truncation.
- *
- * Note that local_clock() is only defined wrt to the current CPU;
- * the comparisons are no longer valid if we switch CPUs. Instead of
- * blocking preemption for the entire busywait, we can detect the CPU
- * switch and use that as indicator of system load and a reason to
- * stop busywaiting, see busywait_stop().
- */
- *cpu = get_cpu();
- t = local_clock() >> 10;
- put_cpu();
-
- return t;
-}
-
-static bool busywait_stop(unsigned long timeout, unsigned cpu)
-{
- unsigned this_cpu;
-
- if (time_after(local_clock_us(&this_cpu), timeout))
- return true;
-
- return this_cpu != cpu;
-}
-
-static bool __i915_spin_request(struct drm_i915_gem_request *req,
- struct intel_wait *wait,
- int state)
-{
- unsigned long timeout;
- unsigned cpu;
-
- /* When waiting for high frequency requests, e.g. during synchronous
- * rendering split between the CPU and GPU, the finite amount of time
- * required to set up the irq and wait upon it limits the response
- * rate. By busywaiting on the request completion for a short while we
- * can service the high frequency waits as quick as possible. However,
- * if it is a slow request, we want to sleep as quickly as possible.
- * The tradeoff between waiting and sleeping is roughly the time it
- * takes to sleep on a request, on the order of a microsecond.
- */
-
- /* Only spin if we know the GPU is processing this request */
- if (!i915_gem_request_started(req))
- return false;
-
- timeout = local_clock_us(&cpu) + 5;
- do {
- if (i915_gem_request_completed(req))
- return true;
-
- if (signal_pending_state(state, wait->task))
- break;
-
- if (busywait_stop(timeout, cpu))
- break;
-
- cpu_relax_lowlatency();
-
- /* Break the loop if we have consumed the timeslice (or been
- * preempted) or when either the background thread has
- * enabled the interrupt, or the IRQ itself has fired.
- */
- } while (!need_resched() && wait->task->state == state);
-
- return false;
-}
-
-/**
- * __i915_wait_request - wait until execution of request has finished
- * @req: duh!
- * @interruptible: do an interruptible wait (normally yes)
- * @timeout: in - how long to wait (NULL forever); out - how much time remaining
- *
- * Note: It is of utmost importance that the passed in seqno and reset_counter
- * values have been read by the caller in an smp safe manner. Where read-side
- * locks are involved, it is sufficient to read the reset_counter before
- * unlocking the lock that protects the seqno. For lockless tricks, the
- * reset_counter _must_ be read before, and an appropriate smp_rmb must be
- * inserted.
- *
- * Returns 0 if the request was found within the alloted time. Else returns the
- * errno with remaining time filled in timeout argument.
- */
-int __i915_wait_request(struct drm_i915_gem_request *req,
- bool interruptible,
- s64 *timeout,
- struct intel_rps_client *rps)
-{
- int state = interruptible ? TASK_INTERRUPTIBLE : TASK_UNINTERRUPTIBLE;
- struct intel_wait wait;
- unsigned long timeout_remain;
- int ret = 0;
-
- might_sleep();
-
- if (list_empty(&req->list))
- return 0;
-
- if (i915_gem_request_completed(req))
- return 0;
-
- timeout_remain = MAX_SCHEDULE_TIMEOUT;
- if (timeout) {
- if (WARN_ON(*timeout < 0))
- return -EINVAL;
-
- if (*timeout == 0)
- return -ETIME;
-
- /* Record current time in case interrupted, or wedged */
- timeout_remain = nsecs_to_jiffies_timeout(*timeout);
- *timeout += ktime_get_raw_ns();
- }
-
- trace_i915_gem_request_wait_begin(req);
-
- /* This client is about to stall waiting for the GPU. In many cases
- * this is undesirable and limits the throughput of the system, as
- * many clients cannot continue processing user input/output whilst
- * blocked. RPS autotuning may take tens of milliseconds to respond
- * to the GPU load and thus incurs additional latency for the client.
- * We can circumvent that by promoting the GPU frequency to maximum
- * before we wait. This makes the GPU throttle up much more quickly
- * (good for benchmarks and user experience, e.g. window animations),
- * but at a cost of spending more power processing the workload
- * (bad for battery). Not all clients even want their results
- * immediately and for them we should just let the GPU select its own
- * frequency to maximise efficiency. To prevent a single client from
- * forcing the clocks too high for the whole system, we only allow
- * each client to waitboost once in a busy period.
- */
- if (INTEL_INFO(req->i915)->gen >= 6)
- gen6_rps_boost(req->i915, rps, req->emitted_jiffies);
-
- intel_wait_init(&wait, req->seqno);
- set_task_state(wait.task, state);
-
- /* Optimistic spin for the next ~jiffie before touching IRQs */
- if (intel_engine_add_wait(req->ring, &wait)) {
- if (__i915_spin_request(req, &wait, state))
- goto complete;
-
- /* In order to check that we haven't missed the interrupt
- * as we enabled it, we need to kick ourselves to do a
- * coherent check on the seqno before we sleep.
- */
- if (intel_engine_enable_wait_irq(req->ring, &wait))
- goto wakeup;
- }
-
- for (;;) {
- if (signal_pending_state(state, wait.task)) {
- ret = -ERESTARTSYS;
- break;
- }
-
- /* Ensure that even if the GPU hangs, we get woken up. */
- i915_queue_hangcheck(req->i915);
-
- timeout_remain = io_schedule_timeout(timeout_remain);
- if (timeout_remain == 0) {
- ret = -ETIME;
- break;
- }
-
- if (intel_wait_complete(&wait))
- break;
-
-wakeup:
- set_task_state(wait.task, state);
-
- /* Carefully check if the request is complete, giving time
- * for the seqno to be visible following the interrupt.
- * We also have to check in case we are kicked by the GPU
- * reset in order to drop the struct_mutex.
- */
- if (__i915_request_irq_complete(req))
- break;
- }
-
-complete:
- intel_engine_remove_wait(req->ring, &wait);
- __set_task_state(wait.task, TASK_RUNNING);
- trace_i915_gem_request_wait_end(req);
-
- if (timeout) {
- *timeout -= ktime_get_raw_ns();
- if (*timeout < 0)
- *timeout = 0;
-
- /*
- * Apparently ktime isn't accurate enough and occasionally has a
- * bit of mismatch in the jiffies<->nsecs<->ktime loop. So patch
- * things up to make the test happy. We allow up to 1 jiffy.
- *
- * This is a regrssion from the timespec->ktime conversion.
- */
- if (ret == -ETIME && *timeout < jiffies_to_usecs(1)*1000)
- *timeout = 0;
- }
-
- if (ret == 0 && rps && req->seqno == req->ring->last_submitted_seqno) {
- /* The GPU is now idle and this client has stalled.
- * Since no other client has submitted a request in the
- * meantime, assume that this client is the only one
- * supplying work to the GPU but is unable to keep that
- * work supplied because it is waiting. Since the GPU is
- * then never kept fully busy, RPS autoclocking will
- * keep the clocks relatively low, causing further delays.
- * Compensate by giving the synchronous client credit for
- * a waitboost next time.
- */
- spin_lock(&req->i915->rps.client_lock);
- list_del_init(&rps->link);
- spin_unlock(&req->i915->rps.client_lock);
- }
-
- return ret;
-}
-
-int i915_gem_request_add_to_client(struct drm_i915_gem_request *req,
- struct drm_file *file)
-{
- struct drm_i915_private *dev_private;
- struct drm_i915_file_private *file_priv;
-
- WARN_ON(!req || !file || req->file_priv);
-
- if (!req || !file)
- return -EINVAL;
-
- if (req->file_priv)
- return -EINVAL;
-
- dev_private = req->ring->dev->dev_private;
- file_priv = file->driver_priv;
-
- spin_lock(&file_priv->mm.lock);
- req->file_priv = file_priv;
- list_add_tail(&req->client_list, &file_priv->mm.request_list);
- spin_unlock(&file_priv->mm.lock);
-
- req->pid = get_pid(task_pid(current));
-
- return 0;
-}
-
-static inline void
-i915_gem_request_remove_from_client(struct drm_i915_gem_request *request)
-{
- struct drm_i915_file_private *file_priv = request->file_priv;
-
- if (!file_priv)
- return;
-
- spin_lock(&file_priv->mm.lock);
- list_del(&request->client_list);
- request->file_priv = NULL;
- spin_unlock(&file_priv->mm.lock);
-
- put_pid(request->pid);
- request->pid = NULL;
-}
-
-static void i915_gem_request_retire(struct drm_i915_gem_request *request)
-{
- trace_i915_gem_request_retire(request);
-
- /* We know the GPU must have read the request to have
- * sent us the seqno + interrupt, so use the position
- * of tail of the request to update the last known position
- * of the GPU head.
- *
- * Note this requires that we are always called in request
- * completion order.
- */
- request->ringbuf->last_retired_head = request->postfix;
-
- list_del_init(&request->list);
- i915_gem_request_remove_from_client(request);
-
- i915_gem_request_unreference(request);
-}
-
-static void
-__i915_gem_request_retire__upto(struct drm_i915_gem_request *req)
-{
- struct intel_engine_cs *engine = req->ring;
- struct drm_i915_gem_request *tmp;
-
- lockdep_assert_held(&engine->dev->struct_mutex);
-
- if (list_empty(&req->list))
- return;
-
- do {
- tmp = list_first_entry(&engine->request_list,
- typeof(*tmp), list);
-
- i915_gem_request_retire(tmp);
- } while (tmp != req);
-
- WARN_ON(i915_verify_lists(engine->dev));
-}
-
-/**
- * Waits for a request to be signaled, and cleans up the
- * request and object lists appropriately for that event.
- */
-int
-i915_wait_request(struct drm_i915_gem_request *req)
-{
- struct drm_device *dev;
- struct drm_i915_private *dev_priv;
- bool interruptible;
- int ret;
-
- BUG_ON(req == NULL);
-
- dev = req->ring->dev;
- dev_priv = dev->dev_private;
- interruptible = dev_priv->mm.interruptible;
-
- BUG_ON(!mutex_is_locked(&dev->struct_mutex));
-
- ret = __i915_wait_request(req, interruptible, NULL, NULL);
- if (ret)
- return ret;
-
- __i915_gem_request_retire__upto(req);
- return 0;
-}
-
/**
* Ensures that all rendering to the object has completed and the object is
* safe to unbind from the GTT or access from the CPU.
@@ -1515,7 +1156,7 @@ i915_gem_object_retire_request(struct drm_i915_gem_object *obj,
else if (obj->last_write_req == req)
i915_gem_object_retire__write(obj);

- __i915_gem_request_retire__upto(req);
+ i915_gem_request_retire_upto(req);
}

/* A nonblocking variant of the above wait. This is a highly dangerous routine
@@ -2441,94 +2082,6 @@ i915_gem_object_retire__read(struct drm_i915_gem_object *obj, int ring)
drm_gem_object_unreference(&obj->base);
}

-static int
-i915_gem_init_seqno(struct drm_device *dev, u32 seqno)
-{
- struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_engine_cs *ring;
- int ret, i, j;
-
- /* Carefully retire all requests without writing to the rings */
- for_each_ring(ring, dev_priv, i) {
- ret = intel_ring_idle(ring);
- if (ret)
- return ret;
- }
- i915_gem_retire_requests(dev);
-
- /* Finally reset hw state */
- for_each_ring(ring, dev_priv, i) {
- intel_ring_init_seqno(ring, seqno);
-
- for (j = 0; j < ARRAY_SIZE(ring->semaphore.sync_seqno); j++)
- ring->semaphore.sync_seqno[j] = 0;
- }
-
- return 0;
-}
-
-int i915_gem_set_seqno(struct drm_device *dev, u32 seqno)
-{
- struct drm_i915_private *dev_priv = dev->dev_private;
- int ret;
-
- if (seqno == 0)
- return -EINVAL;
-
- /* HWS page needs to be set less than what we
- * will inject to ring
- */
- ret = i915_gem_init_seqno(dev, seqno - 1);
- if (ret)
- return ret;
-
- /* Carefully set the last_seqno value so that wrap
- * detection still works
- */
- dev_priv->next_seqno = seqno;
- dev_priv->last_seqno = seqno - 1;
- if (dev_priv->last_seqno == 0)
- dev_priv->last_seqno--;
-
- return 0;
-}
-
-int
-i915_gem_get_seqno(struct drm_device *dev, u32 *seqno)
-{
- struct drm_i915_private *dev_priv = dev->dev_private;
-
- /* reserve 0 for non-seqno */
- if (dev_priv->next_seqno == 0) {
- int ret = i915_gem_init_seqno(dev, 0);
- if (ret)
- return ret;
-
- dev_priv->next_seqno = 1;
- }
-
- *seqno = dev_priv->last_seqno = dev_priv->next_seqno++;
- return 0;
-}
-
-static void i915_gem_mark_busy(struct drm_i915_private *dev_priv)
-{
- if (dev_priv->mm.busy)
- return;
-
- intel_runtime_pm_get_noresume(dev_priv);
-
- i915_update_gfx_val(dev_priv);
- if (INTEL_INFO(dev_priv)->gen >= 6)
- gen6_rps_busy(dev_priv);
-
- queue_delayed_work(dev_priv->wq,
- &dev_priv->mm.retire_work,
- round_jiffies_up_relative(HZ));
-
- dev_priv->mm.busy = true;
-}
-
static void i915_gem_mark_idle(struct drm_i915_private *dev_priv)
{
dev_priv->mm.busy = false;
@@ -2542,92 +2095,6 @@ static void i915_gem_mark_idle(struct drm_i915_private *dev_priv)
intel_runtime_pm_put(dev_priv);
}

-/*
- * NB: This function is not allowed to fail. Doing so would mean the the
- * request is not being tracked for completion but the work itself is
- * going to happen on the hardware. This would be a Bad Thing(tm).
- */
-void __i915_add_request(struct drm_i915_gem_request *request,
- struct drm_i915_gem_object *obj,
- bool flush_caches)
-{
- struct intel_engine_cs *ring;
- struct drm_i915_private *dev_priv;
- struct intel_ringbuffer *ringbuf;
- u32 request_start;
- int ret;
-
- if (WARN_ON(request == NULL))
- return;
-
- ring = request->ring;
- dev_priv = ring->dev->dev_private;
- ringbuf = request->ringbuf;
-
- /*
- * To ensure that this call will not fail, space for its emissions
- * should already have been reserved in the ring buffer. Let the ring
- * know that it is time to use that space up.
- */
- intel_ring_reserved_space_use(ringbuf);
-
- request_start = intel_ring_get_tail(ringbuf);
- /*
- * Emit any outstanding flushes - execbuf can fail to emit the flush
- * after having emitted the batchbuffer command. Hence we need to fix
- * things up similar to emitting the lazy request. The difference here
- * is that the flush _must_ happen before the next request, no matter
- * what.
- */
- if (flush_caches) {
- if (i915.enable_execlists)
- ret = logical_ring_flush_all_caches(request);
- else
- ret = intel_ring_flush_all_caches(request);
- /* Not allowed to fail! */
- WARN(ret, "*_ring_flush_all_caches failed: %d!\n", ret);
- }
-
- /* Record the position of the start of the request so that
- * should we detect the updated seqno part-way through the
- * GPU processing the request, we never over-estimate the
- * position of the head.
- */
- request->postfix = intel_ring_get_tail(ringbuf);
-
- if (i915.enable_execlists)
- ret = ring->emit_request(request);
- else {
- ret = ring->add_request(request);
-
- request->tail = intel_ring_get_tail(ringbuf);
- }
- /* Not allowed to fail! */
- WARN(ret, "emit|add_request failed: %d!\n", ret);
-
- request->head = request_start;
-
- /* Whilst this request exists, batch_obj will be on the
- * active_list, and so will hold the active reference. Only when this
- * request is retired will the the batch_obj be moved onto the
- * inactive_list and lose its active reference. Hence we do not need
- * to explicitly hold another reference here.
- */
- request->batch_obj = obj;
-
- request->emitted_jiffies = jiffies;
- request->previous_seqno = ring->last_submitted_seqno;
- ring->last_submitted_seqno = request->seqno;
- list_add_tail(&request->list, &ring->request_list);
-
- trace_i915_gem_request_add(request);
-
- i915_gem_mark_busy(dev_priv);
-
- /* Sanity check that the reserved size was large enough. */
- intel_ring_reserved_space_end(ringbuf);
-}
-
static bool i915_context_is_banned(struct drm_i915_private *dev_priv,
const struct intel_context *ctx)
{
@@ -2666,109 +2133,6 @@ static void i915_set_reset_status(struct drm_i915_private *dev_priv,
}
}

-void i915_gem_request_free(struct kref *req_ref)
-{
- struct drm_i915_gem_request *req = container_of(req_ref,
- typeof(*req), ref);
- struct intel_context *ctx = req->ctx;
-
- if (req->file_priv)
- i915_gem_request_remove_from_client(req);
-
- if (ctx) {
- if (i915.enable_execlists) {
- if (ctx != req->ring->default_context)
- intel_lr_context_unpin(req);
- }
-
- i915_gem_context_unreference(ctx);
- }
-
- kmem_cache_free(req->i915->requests, req);
-}
-
-int i915_gem_request_alloc(struct intel_engine_cs *ring,
- struct intel_context *ctx,
- struct drm_i915_gem_request **req_out)
-{
- struct drm_i915_private *dev_priv = to_i915(ring->dev);
- unsigned reset_counter = i915_reset_counter(&dev_priv->gpu_error);
- struct drm_i915_gem_request *req;
- int ret;
-
- if (!req_out)
- return -EINVAL;
-
- *req_out = NULL;
-
- /* ABI: Before userspace accesses the GPU (e.g. execbuffer), report
- * EIO if the GPU is already wedged, or EAGAIN to drop the struct_mutex
- * and restart.
- */
- ret = i915_gem_check_wedge(reset_counter, dev_priv->mm.interruptible);
- if (ret)
- return ret;
-
- req = kmem_cache_zalloc(dev_priv->requests, GFP_KERNEL);
- if (req == NULL)
- return -ENOMEM;
-
- ret = i915_gem_get_seqno(ring->dev, &req->seqno);
- if (ret)
- goto err;
-
- kref_init(&req->ref);
- req->i915 = dev_priv;
- req->ring = ring;
- req->reset_counter = reset_counter;
- req->ctx = ctx;
- i915_gem_context_reference(req->ctx);
-
- if (i915.enable_execlists)
- ret = intel_logical_ring_alloc_request_extras(req);
- else
- ret = intel_ring_alloc_request_extras(req);
- if (ret) {
- i915_gem_context_unreference(req->ctx);
- goto err;
- }
-
- /*
- * Reserve space in the ring buffer for all the commands required to
- * eventually emit this request. This is to guarantee that the
- * i915_add_request() call can't fail. Note that the reserve may need
- * to be redone if the request is not actually submitted straight
- * away, e.g. because a GPU scheduler has deferred it.
- */
- if (i915.enable_execlists)
- ret = intel_logical_ring_reserve_space(req);
- else
- ret = intel_ring_reserve_space(req);
- if (ret) {
- /*
- * At this point, the request is fully allocated even if not
- * fully prepared. Thus it can be cleaned up using the proper
- * free code.
- */
- i915_gem_request_cancel(req);
- return ret;
- }
-
- *req_out = req;
- return 0;
-
-err:
- kmem_cache_free(dev_priv->requests, req);
- return ret;
-}
-
-void i915_gem_request_cancel(struct drm_i915_gem_request *req)
-{
- intel_ring_reserved_space_cancel(req->ringbuf);
-
- i915_gem_request_unreference(req);
-}
-
struct drm_i915_gem_request *
i915_gem_find_active_request(struct intel_engine_cs *ring)
{
@@ -2850,14 +2214,14 @@ static void i915_gem_reset_ring_cleanup(struct drm_i915_private *dev_priv,
* implicit references on things like e.g. ppgtt address spaces through
* the request.
*/
- while (!list_empty(&ring->request_list)) {
+ if (!list_empty(&ring->request_list)) {
struct drm_i915_gem_request *request;

- request = list_first_entry(&ring->request_list,
- struct drm_i915_gem_request,
- list);
+ request = list_last_entry(&ring->request_list,
+ struct drm_i915_gem_request,
+ list);

- i915_gem_request_retire(request);
+ i915_gem_request_retire_upto(request);
}

/* Having flushed all requests from all queues, we know that all
@@ -2922,7 +2286,7 @@ i915_gem_retire_requests_ring(struct intel_engine_cs *ring)
if (!i915_gem_request_completed(request))
break;

- i915_gem_request_retire(request);
+ i915_gem_request_retire_upto(request);
}

/* Move any buffers on the active list that are no longer referenced
@@ -3053,7 +2417,7 @@ i915_gem_object_flush_active(struct drm_i915_gem_object *obj)
goto retire;

if (i915_gem_request_completed(req)) {
- __i915_gem_request_retire__upto(req);
+ i915_gem_request_retire_upto(req);
retire:
i915_gem_object_retire__read(obj, i);
}
diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
new file mode 100644
index 000000000000..b4ede6dd7b20
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_gem_request.c
@@ -0,0 +1,659 @@
+/*
+ * Copyright © 2008-2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include "i915_drv.h"
+
+static int
+i915_gem_check_wedge(unsigned reset_counter, bool interruptible)
+{
+ if (__i915_terminally_wedged(reset_counter))
+ return -EIO;
+
+ if (__i915_reset_in_progress(reset_counter)) {
+ /* Non-interruptible callers can't handle -EAGAIN, hence return
+ * -EIO unconditionally for these. */
+ if (!interruptible)
+ return -EIO;
+
+ return -EAGAIN;
+ }
+
+ return 0;
+}
+
+static int
+i915_gem_init_seqno(struct drm_i915_private *dev_priv, u32 seqno)
+{
+ struct intel_engine_cs *ring;
+ int ret, i, j;
+
+ /* Carefully retire all requests without writing to the rings */
+ for_each_ring(ring, dev_priv, i) {
+ ret = intel_ring_idle(ring);
+ if (ret)
+ return ret;
+ }
+ i915_gem_retire_requests(dev_priv->dev);
+
+ /* Finally reset hw state */
+ for_each_ring(ring, dev_priv, i) {
+ intel_ring_init_seqno(ring, seqno);
+
+ for (j = 0; j < ARRAY_SIZE(ring->semaphore.sync_seqno); j++)
+ ring->semaphore.sync_seqno[j] = 0;
+ }
+
+ return 0;
+}
+
+int i915_gem_set_seqno(struct drm_device *dev, u32 seqno)
+{
+ struct drm_i915_private *dev_priv = dev->dev_private;
+ int ret;
+
+ if (seqno == 0)
+ return -EINVAL;
+
+ /* HWS page needs to be set less than what we
+ * will inject to ring
+ */
+ ret = i915_gem_init_seqno(dev_priv, seqno - 1);
+ if (ret)
+ return ret;
+
+ /* Carefully set the last_seqno value so that wrap
+ * detection still works
+ */
+ dev_priv->next_seqno = seqno;
+ dev_priv->last_seqno = seqno - 1;
+ if (dev_priv->last_seqno == 0)
+ dev_priv->last_seqno--;
+
+ return 0;
+}
+
+static int
+i915_gem_get_seqno(struct drm_i915_private *dev_priv, u32 *seqno)
+{
+ /* reserve 0 for non-seqno */
+ if (unlikely(dev_priv->next_seqno == 0)) {
+ int ret = i915_gem_init_seqno(dev_priv, 0);
+ if (ret)
+ return ret;
+
+ dev_priv->next_seqno = 1;
+ }
+
+ *seqno = dev_priv->last_seqno = dev_priv->next_seqno++;
+ return 0;
+}
+
+int i915_gem_request_alloc(struct intel_engine_cs *ring,
+ struct intel_context *ctx,
+ struct drm_i915_gem_request **req_out)
+{
+ struct drm_i915_private *dev_priv = to_i915(ring->dev);
+ unsigned reset_counter = i915_reset_counter(&dev_priv->gpu_error);
+ struct drm_i915_gem_request *req;
+ int ret;
+
+ if (!req_out)
+ return -EINVAL;
+
+ *req_out = NULL;
+
+ /* ABI: Before userspace accesses the GPU (e.g. execbuffer), report
+ * EIO if the GPU is already wedged, or EAGAIN to drop the struct_mutex
+ * and restart.
+ */
+ ret = i915_gem_check_wedge(reset_counter, dev_priv->mm.interruptible);
+ if (ret)
+ return ret;
+
+ req = kmem_cache_zalloc(dev_priv->requests, GFP_KERNEL);
+ if (req == NULL)
+ return -ENOMEM;
+
+ ret = i915_gem_get_seqno(dev_priv, &req->seqno);
+ if (ret)
+ goto err;
+
+ kref_init(&req->ref);
+ req->i915 = dev_priv;
+ req->ring = ring;
+ req->reset_counter = reset_counter;
+ req->ctx = ctx;
+ i915_gem_context_reference(req->ctx);
+
+ if (i915.enable_execlists)
+ ret = intel_logical_ring_alloc_request_extras(req);
+ else
+ ret = intel_ring_alloc_request_extras(req);
+ if (ret) {
+ i915_gem_context_unreference(req->ctx);
+ goto err;
+ }
+
+ /*
+ * Reserve space in the ring buffer for all the commands required to
+ * eventually emit this request. This is to guarantee that the
+ * i915_add_request() call can't fail. Note that the reserve may need
+ * to be redone if the request is not actually submitted straight
+ * away, e.g. because a GPU scheduler has deferred it.
+ */
+ if (i915.enable_execlists)
+ ret = intel_logical_ring_reserve_space(req);
+ else
+ ret = intel_ring_reserve_space(req);
+ if (ret) {
+ /*
+ * At this point, the request is fully allocated even if not
+ * fully prepared. Thus it can be cleaned up using the proper
+ * free code.
+ */
+ i915_gem_request_cancel(req);
+ return ret;
+ }
+
+ *req_out = req;
+ return 0;
+
+err:
+ kmem_cache_free(dev_priv->requests, req);
+ return ret;
+}
+
+void i915_gem_request_cancel(struct drm_i915_gem_request *req)
+{
+ intel_ring_reserved_space_cancel(req->ringbuf);
+
+ i915_gem_request_unreference(req);
+}
+
+int i915_gem_request_add_to_client(struct drm_i915_gem_request *req,
+ struct drm_file *file)
+{
+ struct drm_i915_private *dev_private;
+ struct drm_i915_file_private *file_priv;
+
+ WARN_ON(!req || !file || req->file_priv);
+
+ if (!req || !file)
+ return -EINVAL;
+
+ if (req->file_priv)
+ return -EINVAL;
+
+ dev_private = req->ring->dev->dev_private;
+ file_priv = file->driver_priv;
+
+ spin_lock(&file_priv->mm.lock);
+ req->file_priv = file_priv;
+ list_add_tail(&req->client_list, &file_priv->mm.request_list);
+ spin_unlock(&file_priv->mm.lock);
+
+ req->pid = get_pid(task_pid(current));
+
+ return 0;
+}
+
+static inline void
+i915_gem_request_remove_from_client(struct drm_i915_gem_request *request)
+{
+ struct drm_i915_file_private *file_priv = request->file_priv;
+
+ if (!file_priv)
+ return;
+
+ spin_lock(&file_priv->mm.lock);
+ list_del(&request->client_list);
+ request->file_priv = NULL;
+ spin_unlock(&file_priv->mm.lock);
+
+ put_pid(request->pid);
+ request->pid = NULL;
+}
+
+static void i915_gem_request_retire(struct drm_i915_gem_request *request)
+{
+ trace_i915_gem_request_retire(request);
+
+ /* We know the GPU must have read the request to have
+ * sent us the seqno + interrupt, so use the position
+ * of tail of the request to update the last known position
+ * of the GPU head.
+ *
+ * Note this requires that we are always called in request
+ * completion order.
+ */
+ request->ringbuf->last_retired_head = request->postfix;
+
+ list_del_init(&request->list);
+ i915_gem_request_remove_from_client(request);
+
+ i915_gem_request_unreference(request);
+}
+
+void
+i915_gem_request_retire_upto(struct drm_i915_gem_request *req)
+{
+ struct intel_engine_cs *engine = req->ring;
+ struct drm_i915_gem_request *tmp;
+
+ lockdep_assert_held(&engine->dev->struct_mutex);
+
+ if (list_empty(&req->list))
+ return;
+
+ do {
+ tmp = list_first_entry(&engine->request_list,
+ typeof(*tmp), list);
+
+ i915_gem_request_retire(tmp);
+ } while (tmp != req);
+
+ WARN_ON(i915_verify_lists(engine->dev));
+}
+
+static void i915_gem_mark_busy(struct drm_i915_private *dev_priv)
+{
+ if (dev_priv->mm.busy)
+ return;
+
+ intel_runtime_pm_get_noresume(dev_priv);
+
+ i915_update_gfx_val(dev_priv);
+ if (INTEL_INFO(dev_priv)->gen >= 6)
+ gen6_rps_busy(dev_priv);
+
+ queue_delayed_work(dev_priv->wq,
+ &dev_priv->mm.retire_work,
+ round_jiffies_up_relative(HZ));
+
+ dev_priv->mm.busy = true;
+}
+
+/*
+ * NB: This function is not allowed to fail. Doing so would mean the
+ * request is not being tracked for completion but the work itself is
+ * going to happen on the hardware. This would be a Bad Thing(tm).
+ */
+void __i915_add_request(struct drm_i915_gem_request *request,
+ struct drm_i915_gem_object *obj,
+ bool flush_caches)
+{
+ struct intel_engine_cs *ring;
+ struct drm_i915_private *dev_priv;
+ struct intel_ringbuffer *ringbuf;
+ u32 request_start;
+ int ret;
+
+ if (WARN_ON(request == NULL))
+ return;
+
+ ring = request->ring;
+ dev_priv = ring->dev->dev_private;
+ ringbuf = request->ringbuf;
+
+ /*
+ * To ensure that this call will not fail, space for its emissions
+ * should already have been reserved in the ring buffer. Let the ring
+ * know that it is time to use that space up.
+ */
+ intel_ring_reserved_space_use(ringbuf);
+
+ request_start = intel_ring_get_tail(ringbuf);
+ /*
+ * Emit any outstanding flushes - execbuf can fail to emit the flush
+ * after having emitted the batchbuffer command. Hence we need to fix
+ * things up similar to emitting the lazy request. The difference here
+ * is that the flush _must_ happen before the next request, no matter
+ * what.
+ */
+ if (flush_caches) {
+ if (i915.enable_execlists)
+ ret = logical_ring_flush_all_caches(request);
+ else
+ ret = intel_ring_flush_all_caches(request);
+ /* Not allowed to fail! */
+ WARN(ret, "*_ring_flush_all_caches failed: %d!\n", ret);
+ }
+
+ /* Record the position of the start of the request so that
+ * should we detect the updated seqno part-way through the
+ * GPU processing the request, we never over-estimate the
+ * position of the head.
+ */
+ request->postfix = intel_ring_get_tail(ringbuf);
+
+ if (i915.enable_execlists)
+ ret = ring->emit_request(request);
+ else {
+ ret = ring->add_request(request);
+
+ request->tail = intel_ring_get_tail(ringbuf);
+ }
+ /* Not allowed to fail! */
+ WARN(ret, "emit|add_request failed: %d!\n", ret);
+
+ request->head = request_start;
+
+ /* Whilst this request exists, batch_obj will be on the
+ * active_list, and so will hold the active reference. Only when this
+ * request is retired will the batch_obj be moved onto the
+ * inactive_list and lose its active reference. Hence we do not need
+ * to explicitly hold another reference here.
+ */
+ request->batch_obj = obj;
+
+ request->emitted_jiffies = jiffies;
+ request->previous_seqno = ring->last_submitted_seqno;
+ ring->last_submitted_seqno = request->seqno;
+ list_add_tail(&request->list, &ring->request_list);
+
+ trace_i915_gem_request_add(request);
+
+ i915_gem_mark_busy(dev_priv);
+
+ /* Sanity check that the reserved size was large enough. */
+ intel_ring_reserved_space_end(ringbuf);
+}
+
+
+static unsigned long local_clock_us(unsigned *cpu)
+{
+ unsigned long t;
+
+ /* Cheaply and approximately convert from nanoseconds to microseconds.
+ * The result and subsequent calculations are also defined in the same
+ * approximate microseconds units. The principal source of timing
+ * error here is from the simple truncation.
+ *
+ * Note that local_clock() is only defined wrt to the current CPU;
+ * the comparisons are no longer valid if we switch CPUs. Instead of
+ * blocking preemption for the entire busywait, we can detect the CPU
+ * switch and use that as indicator of system load and a reason to
+ * stop busywaiting, see busywait_stop().
+ */
+ *cpu = get_cpu();
+ t = local_clock() >> 10;
+ put_cpu();
+
+ return t;
+}
+
+static bool busywait_stop(unsigned long timeout, unsigned cpu)
+{
+ unsigned this_cpu;
+
+ if (time_after(local_clock_us(&this_cpu), timeout))
+ return true;
+
+ return this_cpu != cpu;
+}
+
+static bool __i915_spin_request(struct drm_i915_gem_request *req,
+ struct intel_wait *wait,
+ int state)
+{
+ unsigned long timeout;
+ unsigned cpu;
+
+ /* When waiting for high frequency requests, e.g. during synchronous
+ * rendering split between the CPU and GPU, the finite amount of time
+ * required to set up the irq and wait upon it limits the response
+ * rate. By busywaiting on the request completion for a short while we
+ * can service the high frequency waits as quickly as possible. However,
+ * if it is a slow request, we want to sleep as quickly as possible.
+ * The tradeoff between waiting and sleeping is roughly the time it
+ * takes to sleep on a request, on the order of a microsecond.
+ */
+
+ /* Only spin if we know the GPU is processing this request */
+ if (!i915_gem_request_started(req))
+ return false;
+
+ timeout = local_clock_us(&cpu) + 5;
+ do {
+ if (i915_gem_request_completed(req))
+ return true;
+
+ if (signal_pending_state(state, wait->task))
+ break;
+
+ if (busywait_stop(timeout, cpu))
+ break;
+
+ cpu_relax_lowlatency();
+
+ /* Break the loop if we have consumed the timeslice (or been
+ * preempted) or when either the background thread has
+ * enabled the interrupt, or the IRQ itself has fired.
+ */
+ } while (!need_resched() && wait->task->state == state);
+
+ return false;
+}
+
+/**
+ * __i915_wait_request - wait until execution of request has finished
+ * @req: the request to wait upon
+ * @interruptible: do an interruptible wait (normally yes)
+ * @timeout: in - how long to wait (NULL forever); out - how much time remaining
+ *
+ * Note: It is of utmost importance that the passed in seqno and reset_counter
+ * values have been read by the caller in an smp safe manner. Where read-side
+ * locks are involved, it is sufficient to read the reset_counter before
+ * unlocking the lock that protects the seqno. For lockless tricks, the
+ * reset_counter _must_ be read before, and an appropriate smp_rmb must be
+ * inserted.
+ *
+ * Returns 0 if the request was found within the allotted time. Else returns the
+ * errno with remaining time filled in timeout argument.
+ */
+int __i915_wait_request(struct drm_i915_gem_request *req,
+ bool interruptible,
+ s64 *timeout,
+ struct intel_rps_client *rps)
+{
+ int state = interruptible ? TASK_INTERRUPTIBLE : TASK_UNINTERRUPTIBLE;
+ struct intel_wait wait;
+ unsigned long timeout_remain;
+ int ret = 0;
+
+ might_sleep();
+
+ if (list_empty(&req->list))
+ return 0;
+
+ if (i915_gem_request_completed(req))
+ return 0;
+
+ timeout_remain = MAX_SCHEDULE_TIMEOUT;
+ if (timeout) {
+ if (WARN_ON(*timeout < 0))
+ return -EINVAL;
+
+ if (*timeout == 0)
+ return -ETIME;
+
+ /* Record current time in case interrupted, or wedged */
+ timeout_remain = nsecs_to_jiffies_timeout(*timeout);
+ *timeout += ktime_get_raw_ns();
+ }
+
+ trace_i915_gem_request_wait_begin(req);
+
+ /* This client is about to stall waiting for the GPU. In many cases
+ * this is undesirable and limits the throughput of the system, as
+ * many clients cannot continue processing user input/output whilst
+ * blocked. RPS autotuning may take tens of milliseconds to respond
+ * to the GPU load and thus incurs additional latency for the client.
+ * We can circumvent that by promoting the GPU frequency to maximum
+ * before we wait. This makes the GPU throttle up much more quickly
+ * (good for benchmarks and user experience, e.g. window animations),
+ * but at a cost of spending more power processing the workload
+ * (bad for battery). Not all clients even want their results
+ * immediately and for them we should just let the GPU select its own
+ * frequency to maximise efficiency. To prevent a single client from
+ * forcing the clocks too high for the whole system, we only allow
+ * each client to waitboost once in a busy period.
+ */
+ if (INTEL_INFO(req->i915)->gen >= 6)
+ gen6_rps_boost(req->i915, rps, req->emitted_jiffies);
+
+ intel_wait_init(&wait, req->seqno);
+ set_task_state(wait.task, state);
+
+ /* Optimistic spin for the next ~jiffie before touching IRQs */
+ if (intel_engine_add_wait(req->ring, &wait)) {
+ if (__i915_spin_request(req, &wait, state))
+ goto complete;
+
+ /* In order to check that we haven't missed the interrupt
+ * as we enabled it, we need to kick ourselves to do a
+ * coherent check on the seqno before we sleep.
+ */
+ if (intel_engine_enable_wait_irq(req->ring, &wait))
+ goto wakeup;
+ }
+
+ for (;;) {
+ if (signal_pending_state(state, wait.task)) {
+ ret = -ERESTARTSYS;
+ break;
+ }
+
+ /* Ensure that even if the GPU hangs, we get woken up. */
+ i915_queue_hangcheck(req->i915);
+
+ timeout_remain = io_schedule_timeout(timeout_remain);
+ if (timeout_remain == 0) {
+ ret = -ETIME;
+ break;
+ }
+
+ if (intel_wait_complete(&wait))
+ break;
+
+wakeup:
+ set_task_state(wait.task, state);
+
+ /* Carefully check if the request is complete, giving time
+ * for the seqno to be visible following the interrupt.
+ * We also have to check in case we are kicked by the GPU
+ * reset in order to drop the struct_mutex.
+ */
+ if (__i915_request_irq_complete(req))
+ break;
+ }
+
+complete:
+ intel_engine_remove_wait(req->ring, &wait);
+ __set_task_state(wait.task, TASK_RUNNING);
+ trace_i915_gem_request_wait_end(req);
+
+ if (timeout) {
+ *timeout -= ktime_get_raw_ns();
+ if (*timeout < 0)
+ *timeout = 0;
+
+ /*
+ * Apparently ktime isn't accurate enough and occasionally has a
+ * bit of mismatch in the jiffies<->nsecs<->ktime loop. So patch
+ * things up to make the test happy. We allow up to 1 jiffy.
+ *
+ * This is a regression from the timespec->ktime conversion.
+ */
+ if (ret == -ETIME && *timeout < jiffies_to_usecs(1)*1000)
+ *timeout = 0;
+ }
+
+ if (ret == 0 && rps && req->seqno == req->ring->last_submitted_seqno) {
+ /* The GPU is now idle and this client has stalled.
+ * Since no other client has submitted a request in the
+ * meantime, assume that this client is the only one
+ * supplying work to the GPU but is unable to keep that
+ * work supplied because it is waiting. Since the GPU is
+ * then never kept fully busy, RPS autoclocking will
+ * keep the clocks relatively low, causing further delays.
+ * Compensate by giving the synchronous client credit for
+ * a waitboost next time.
+ */
+ spin_lock(&req->i915->rps.client_lock);
+ list_del_init(&rps->link);
+ spin_unlock(&req->i915->rps.client_lock);
+ }
+
+ return ret;
+}
+
+/**
+ * Waits for a request to be signaled, and cleans up the
+ * request and object lists appropriately for that event.
+ */
+int
+i915_wait_request(struct drm_i915_gem_request *req)
+{
+ struct drm_device *dev;
+ struct drm_i915_private *dev_priv;
+ bool interruptible;
+ int ret;
+
+ BUG_ON(req == NULL);
+
+ dev = req->ring->dev;
+ dev_priv = dev->dev_private;
+ interruptible = dev_priv->mm.interruptible;
+
+ BUG_ON(!mutex_is_locked(&dev->struct_mutex));
+
+ ret = __i915_wait_request(req, interruptible, NULL, NULL);
+ if (ret)
+ return ret;
+
+ i915_gem_request_retire_upto(req);
+ return 0;
+}
+
+void i915_gem_request_free(struct kref *req_ref)
+{
+ struct drm_i915_gem_request *req = container_of(req_ref,
+ typeof(*req), ref);
+ struct intel_context *ctx = req->ctx;
+
+ if (req->file_priv)
+ i915_gem_request_remove_from_client(req);
+
+ if (ctx) {
+ if (i915.enable_execlists) {
+ if (ctx != req->ring->default_context)
+ intel_lr_context_unpin(req);
+ }
+
+ i915_gem_context_unreference(ctx);
+ }
+
+ kmem_cache_free(req->i915->requests, req);
+}
diff --git a/drivers/gpu/drm/i915/i915_gem_request.h b/drivers/gpu/drm/i915/i915_gem_request.h
new file mode 100644
index 000000000000..d46f22f30b0a
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_gem_request.h
@@ -0,0 +1,223 @@
+/*
+ * Copyright © 2008-2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#ifndef I915_GEM_REQUEST_H
+#define I915_GEM_REQUEST_H
+
+/**
+ * Request queue structure.
+ *
+ * The request queue allows us to note sequence numbers that have been emitted
+ * and may be associated with active buffers to be retired.
+ *
+ * By keeping this list, we can avoid having to do questionable sequence
+ * number comparisons on buffer last_read|write_seqno. It also allows an
+ * emission time to be associated with the request for tracking how far ahead
+ * of the GPU the submission is.
+ *
+ * The requests are reference counted, so upon creation they should have an
+ * initial reference taken using kref_init
+ */
+struct drm_i915_gem_request {
+ struct kref ref;
+
+ /** On which ring this request was generated */
+ struct drm_i915_private *i915;
+ struct intel_engine_cs *ring;
+ unsigned reset_counter;
+
+ /** GEM sequence number associated with the previous request,
+ * when the HWS breadcrumb is equal to this the GPU is processing
+ * this request.
+ */
+ u32 previous_seqno;
+
+ /** GEM sequence number associated with this request,
+ * when the HWS breadcrumb is equal or greater than this the GPU
+ * has finished processing this request.
+ */
+ u32 seqno;
+
+ /** Position in the ringbuffer of the start of the request */
+ u32 head;
+
+ /**
+ * Position in the ringbuffer of the start of the postfix.
+ * This is required to calculate the maximum available ringbuffer
+ * space without overwriting the postfix.
+ */
+ u32 postfix;
+
+ /** Position in the ringbuffer of the end of the whole request */
+ u32 tail;
+
+ /**
+ * Context and ring buffer related to this request
+ * Contexts are refcounted, so when this request is associated with a
+ * context, we must increment the context's refcount, to guarantee that
+ * it persists while any request is linked to it. Requests themselves
+ * are also refcounted, so the request will only be freed when the last
+ * reference to it is dismissed, and the code in
+ * i915_gem_request_free() will then decrement the refcount on the
+ * context.
+ */
+ struct intel_context *ctx;
+ struct intel_ringbuffer *ringbuf;
+
+ /** Batch buffer related to this request, if any
+ * (used for error state dump only) */
+ struct drm_i915_gem_object *batch_obj;
+
+ /** Time at which this request was emitted, in jiffies. */
+ unsigned long emitted_jiffies;
+
+ /** global list entry for this request */
+ struct list_head list;
+
+ struct drm_i915_file_private *file_priv;
+ /** file_priv list entry for this request */
+ struct list_head client_list;
+
+ /** process identifier submitting this request */
+ struct pid *pid;
+
+ /**
+ * The ELSP only accepts two elements at a time, so we queue
+ * context/tail pairs on a given queue (ring->execlist_queue) until the
+ * hardware is available. The queue serves a double purpose: we also use
+ * it to keep track of the up to 2 contexts currently in the hardware
+ * (usually one in execution and the other queued up by the GPU): We
+ * only remove elements from the head of the queue when the hardware
+ * informs us that an element has been completed.
+ *
+ * All accesses to the queue are mediated by a spinlock
+ * (ring->execlist_lock).
+ */
+
+ /** Execlist link in the submission queue.*/
+ struct list_head execlist_link;
+
+ /** Execlists no. of times this request has been sent to the ELSP */
+ int elsp_submitted;
+};
+
+int i915_gem_request_alloc(struct intel_engine_cs *ring,
+ struct intel_context *ctx,
+ struct drm_i915_gem_request **req_out);
+void i915_gem_request_cancel(struct drm_i915_gem_request *req);
+void i915_gem_request_free(struct kref *req_ref);
+int i915_gem_request_add_to_client(struct drm_i915_gem_request *req,
+ struct drm_file *file);
+void i915_gem_request_retire_upto(struct drm_i915_gem_request *req);
+
+static inline uint32_t
+i915_gem_request_get_seqno(struct drm_i915_gem_request *req)
+{
+ return req ? req->seqno : 0;
+}
+
+static inline struct intel_engine_cs *
+i915_gem_request_get_ring(struct drm_i915_gem_request *req)
+{
+ return req ? req->ring : NULL;
+}
+
+static inline struct drm_i915_gem_request *
+i915_gem_request_reference(struct drm_i915_gem_request *req)
+{
+ if (req)
+ kref_get(&req->ref);
+ return req;
+}
+
+static inline void
+i915_gem_request_unreference(struct drm_i915_gem_request *req)
+{
+ WARN_ON(!mutex_is_locked(&req->ring->dev->struct_mutex));
+ kref_put(&req->ref, i915_gem_request_free);
+}
+
+static inline void
+i915_gem_request_unreference__unlocked(struct drm_i915_gem_request *req)
+{
+ struct drm_device *dev;
+
+ if (!req)
+ return;
+
+ dev = req->ring->dev;
+ if (kref_put_mutex(&req->ref, i915_gem_request_free, &dev->struct_mutex))
+ mutex_unlock(&dev->struct_mutex);
+}
+
+static inline void i915_gem_request_assign(struct drm_i915_gem_request **pdst,
+ struct drm_i915_gem_request *src)
+{
+ if (src)
+ i915_gem_request_reference(src);
+
+ if (*pdst)
+ i915_gem_request_unreference(*pdst);
+
+ *pdst = src;
+}
+
+void __i915_add_request(struct drm_i915_gem_request *req,
+ struct drm_i915_gem_object *batch_obj,
+ bool flush_caches);
+#define i915_add_request(req) \
+ __i915_add_request(req, NULL, true)
+#define i915_add_request_no_flush(req) \
+ __i915_add_request(req, NULL, false)
+
+struct intel_rps_client;
+
+int __i915_wait_request(struct drm_i915_gem_request *req,
+ bool interruptible,
+ s64 *timeout,
+ struct intel_rps_client *rps);
+int __must_check i915_wait_request(struct drm_i915_gem_request *req);
+
+/**
+ * Returns true if seq1 is at or after seq2, accounting for wraparound.
+ */
+static inline bool
+i915_seqno_passed(uint32_t seq1, uint32_t seq2)
+{
+ return (int32_t)(seq1 - seq2) >= 0;
+}
+
+static inline bool i915_gem_request_started(struct drm_i915_gem_request *req)
+{
+ return i915_seqno_passed(intel_ring_get_seqno(req->ring),
+ req->previous_seqno);
+}
+
+static inline bool i915_gem_request_completed(struct drm_i915_gem_request *req)
+{
+ return i915_seqno_passed(intel_ring_get_seqno(req->ring),
+ req->seqno);
+}
+
+#endif /* I915_GEM_REQUEST_H */
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:17:22 UTC
Permalink
Since requests can no longer be generated as a side-effect of
intel_ring_begin(), we know that the seqno will be unchanged during
ring-emission. This predicatablity then means we do not have to check
for the seqno wrapping around whilst emitting the semaphore for
engine->sync_to().
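
A minimal standalone sketch of the invariant this relies upon (the
types and names below are illustrative, not the driver's API): once a
request's seqno is assigned at allocation and can no longer change
during emission, the inter-engine sync filter reduces to a plain
monotonic comparison against a cached per-engine-pair value.

#include <stdbool.h>
#include <stdint.h>

struct request {
	uint32_t seqno;			/* fixed at allocation */
};

struct engine {
	uint32_t sync_seqno[4];		/* last seqno waited upon, per signaller */
};

/* Returns true if a semaphore wait must be emitted; false if this
 * engine has already ordered itself after an equal-or-later seqno from
 * the signaller, making the wait redundant. (The driver updates the
 * cache only after the wait is actually emitted.) */
static bool need_semaphore(struct engine *to, int idx,
			   const struct request *signal)
{
	if (signal->seqno <= to->sync_seqno[idx])
		return false;

	to->sync_seqno[idx] = signal->seqno;	/* stable, so safe to cache */
	return true;
}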

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_gem.c | 13 ++-----
drivers/gpu/drm/i915/intel_ringbuffer.c | 67 ++++++++++++++-------------------
drivers/gpu/drm/i915/intel_ringbuffer.h | 5 +--
3 files changed, 33 insertions(+), 52 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 235a3de6e0a0..b0230e7151ce 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2572,22 +2572,15 @@ __i915_gem_object_sync(struct drm_i915_gem_object *obj,
i915_gem_object_retire_request(obj, from);
} else {
int idx = intel_engine_sync_index(from->engine, to->engine);
- u32 seqno = i915_gem_request_get_seqno(from);
-
- if (seqno <= from->engine->semaphore.sync_seqno[idx])
+ if (from->fence.seqno <= from->engine->semaphore.sync_seqno[idx])
return 0;

trace_i915_gem_ring_sync_to(to, from);
- ret = to->engine->semaphore.sync_to(to, from->engine, seqno);
+ ret = to->engine->semaphore.sync_to(to, from);
if (ret)
return ret;

- /* We use last_read_req because sync_to()
- * might have just caused seqno wrap under
- * the radar.
- */
- from->engine->semaphore.sync_seqno[idx] =
- i915_gem_request_get_seqno(obj->last_read_req[from->engine->id]);
+ from->engine->semaphore.sync_seqno[idx] = from->fence.seqno;
}

return 0;
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index 556e9e2c1fec..d37cdb2f9073 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -1384,69 +1384,58 @@ static inline bool i915_gem_has_seqno_wrapped(struct drm_i915_private *dev_priv,
*/

static int
-gen8_ring_sync(struct drm_i915_gem_request *waiter_req,
- struct intel_engine_cs *signaller,
- u32 seqno)
+gen8_ring_sync(struct drm_i915_gem_request *wait,
+ struct drm_i915_gem_request *signal)
{
- struct intel_ring *waiter = waiter_req->ring;
- struct drm_i915_private *dev_priv = waiter_req->i915;
+ struct intel_ring *waiter = wait->ring;
+ struct drm_i915_private *dev_priv = wait->i915;
int ret;

- ret = intel_ring_begin(waiter_req, 4);
+ ret = intel_ring_begin(wait, 4);
if (ret)
return ret;

- intel_ring_emit(waiter, MI_SEMAPHORE_WAIT |
- MI_SEMAPHORE_GLOBAL_GTT |
- MI_SEMAPHORE_POLL |
- MI_SEMAPHORE_SAD_GTE_SDD);
- intel_ring_emit(waiter, seqno);
intel_ring_emit(waiter,
- lower_32_bits(GEN8_WAIT_OFFSET(waiter_req->engine,
- signaller->id)));
+ MI_SEMAPHORE_WAIT |
+ MI_SEMAPHORE_GLOBAL_GTT |
+ MI_SEMAPHORE_POLL |
+ MI_SEMAPHORE_SAD_GTE_SDD);
+ intel_ring_emit(waiter, signal->fence.seqno);
intel_ring_emit(waiter,
- upper_32_bits(GEN8_WAIT_OFFSET(waiter_req->engine,
- signaller->id)));
+ lower_32_bits(GEN8_WAIT_OFFSET(wait->engine,
+ signal->engine->id)));
+ intel_ring_emit(waiter,
+ upper_32_bits(GEN8_WAIT_OFFSET(wait->engine,
+ signal->engine->id)));
intel_ring_advance(waiter);
return 0;
}

static int
-gen6_ring_sync(struct drm_i915_gem_request *waiter_req,
- struct intel_engine_cs *signaller,
- u32 seqno)
+gen6_ring_sync(struct drm_i915_gem_request *wait,
+ struct drm_i915_gem_request *signal)
{
- struct intel_ring *waiter = waiter_req->ring;
+ struct intel_ring *waiter = wait->ring;
u32 dw1 = MI_SEMAPHORE_MBOX |
MI_SEMAPHORE_COMPARE |
MI_SEMAPHORE_REGISTER;
- u32 wait_mbox = signaller->semaphore.mbox.wait[waiter_req->engine->id];
+ u32 wait_mbox = signal->engine->semaphore.mbox.wait[wait->engine->id];
int ret;

- /* Throughout all of the GEM code, seqno passed implies our current
- * seqno is >= the last seqno executed. However for hardware the
- * comparison is strictly greater than.
- */
- seqno -= 1;
-
WARN_ON(wait_mbox == MI_SEMAPHORE_SYNC_INVALID);

- ret = intel_ring_begin(waiter_req, 4);
+ ret = intel_ring_begin(wait, 4);
if (ret)
return ret;

- /* If seqno wrap happened, omit the wait with no-ops */
- if (likely(!i915_gem_has_seqno_wrapped(waiter_req->i915, seqno))) {
- intel_ring_emit(waiter, dw1 | wait_mbox);
- intel_ring_emit(waiter, seqno);
- intel_ring_emit(waiter, 0);
- intel_ring_emit(waiter, MI_NOOP);
- } else {
- intel_ring_emit(waiter, MI_NOOP);
- intel_ring_emit(waiter, MI_NOOP);
- intel_ring_emit(waiter, MI_NOOP);
- intel_ring_emit(waiter, MI_NOOP);
- }
+ intel_ring_emit(waiter, dw1 | wait_mbox);
+ /* Throughout all of the GEM code, seqno passed implies our current
+ * seqno is >= the last seqno executed. However for hardware the
+ * comparison is strictly greater than.
+ */
+ intel_ring_emit(waiter, signal->fence.seqno - 1);
+ intel_ring_emit(waiter, 0);
+ intel_ring_emit(waiter, MI_NOOP);
intel_ring_advance(waiter);

return 0;
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 8147ce1379fb..fc9c1e453be1 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -283,9 +283,8 @@ struct intel_engine_cs {
};

/* AKA wait() */
- int (*sync_to)(struct drm_i915_gem_request *to_req,
- struct intel_engine_cs *from,
- u32 seqno);
+ int (*sync_to)(struct drm_i915_gem_request *to,
+ struct drm_i915_gem_request *from);
int (*signal)(struct drm_i915_gem_request *signaller_req,
/* num_dwords needed by caller */
unsigned int num_dwords);
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:16:20 UTC
Permalink
In the reset_counter, we use two bits to track a GPU hang and reset. The
low bit is a "reset-in-progress" flag that we set to signal when we need
to break waiters in order for the recovery task to grab the mutex. As
soon as the recovery task has the mutex, we can clear that flag (which
we do by incrementing the reset_counter, thereby incrementing the global
reset epoch). By clearing that flag when the recovery task holds the
struct_mutex, we can forgo a second flag that simply tells GEM to ignore
the "reset-in-progress" flag.

The second flag we store in the reset_counter is whether the
reset failed and we consider the GPU terminally wedged. Whilst this flag
is set, all access to the GPU (at least through GEM rather than direct mmio
access) is verboten.

PS: Fun is in store, as in the future we want to move from a global
reset epoch to per-engine resets with request recovery.
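
A rough sketch of the two-flag encoding described above (the bit
positions are assumptions for illustration, not the driver's headers):
the low bit is "reset in progress", a dedicated high bit marks the
device as terminally wedged, and the remaining bits count reset epochs.

#include <stdatomic.h>
#include <stdbool.h>

#define RESET_IN_PROGRESS	(1u << 0)
#define WEDGED			(1u << 31)

static bool reset_in_progress(atomic_uint *counter)
{
	return atomic_load(counter) & RESET_IN_PROGRESS;
}

static bool terminally_wedged(atomic_uint *counter)
{
	return atomic_load(counter) & WEDGED;
}

/* The recovery task, holding the mutex, clears the in-progress flag by
 * incrementing the counter: an odd value (low bit set) becomes even,
 * which simultaneously advances the reset epoch that waiters compare
 * their saved copy against. */
static void mark_reset_complete(atomic_uint *counter)
{
	atomic_fetch_add(counter, 1);
}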

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
Cc: Daniel Vetter <***@ffwll.ch>
Reviewed-by: Daniel Vetter <***@ffwll.ch>
---
drivers/gpu/drm/i915/i915_debugfs.c | 4 ++--
drivers/gpu/drm/i915/i915_drv.c | 39 ++++++++++++++++++++++---------------
drivers/gpu/drm/i915/i915_drv.h | 3 ---
drivers/gpu/drm/i915/i915_gem.c | 27 +++++++++----------------
drivers/gpu/drm/i915/i915_irq.c | 21 ++------------------
5 files changed, 36 insertions(+), 58 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 932af05b8eec..6ff2d23faaa7 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -4696,7 +4696,7 @@ i915_wedged_get(void *data, u64 *val)
struct drm_device *dev = data;
struct drm_i915_private *dev_priv = dev->dev_private;

- *val = i915_reset_counter(&dev_priv->gpu_error);
+ *val = i915_terminally_wedged(&dev_priv->gpu_error);

return 0;
}
@@ -4715,7 +4715,7 @@ i915_wedged_set(void *data, u64 val)
* while it is writing to 'i915_wedged'
*/

- if (i915_reset_in_progress_or_wedged(&dev_priv->gpu_error))
+ if (i915_reset_in_progress(&dev_priv->gpu_error))
return -EAGAIN;

intel_runtime_pm_get(dev_priv);
diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index 288fec7691dc..2f03379cdb4b 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -873,23 +873,32 @@ int i915_resume_switcheroo(struct drm_device *dev)
int i915_reset(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
- bool simulated;
+ struct i915_gpu_error *error = &dev_priv->gpu_error;
+ unsigned reset_counter;
int ret;

intel_reset_gt_powersave(dev);

mutex_lock(&dev->struct_mutex);

- i915_gem_reset(dev);
+ /* Clear any previous failed attempts at recovery. Time to try again. */
+ atomic_andnot(I915_WEDGED, &error->reset_counter);

- simulated = dev_priv->gpu_error.stop_rings != 0;
+ /* Clear the reset-in-progress flag and increment the reset epoch. */
+ reset_counter = atomic_inc_return(&error->reset_counter);
+ if (WARN_ON(__i915_reset_in_progress(reset_counter))) {
+ ret = -EIO;
+ goto error;
+ }
+
+ i915_gem_reset(dev);

ret = intel_gpu_reset(dev);

/* Also reset the gpu hangman. */
- if (simulated) {
+ if (error->stop_rings != 0) {
DRM_INFO("Simulated gpu hang, resetting stop_rings\n");
- dev_priv->gpu_error.stop_rings = 0;
+ error->stop_rings = 0;
if (ret == -ENODEV) {
DRM_INFO("Reset not implemented, but ignoring "
"error for simulated gpu hangs\n");
@@ -902,8 +911,7 @@ int i915_reset(struct drm_device *dev)

if (ret) {
DRM_ERROR("Failed to reset chip: %i\n", ret);
- mutex_unlock(&dev->struct_mutex);
- return ret;
+ goto error;
}

intel_overlay_reset(dev_priv);
@@ -922,20 +930,14 @@ int i915_reset(struct drm_device *dev)
* was running at the time of the reset (i.e. we weren't VT
* switched away).
*/
-
- /* Used to prevent gem_check_wedged returning -EAGAIN during gpu reset */
- dev_priv->gpu_error.reload_in_reset = true;
-
ret = i915_gem_init_hw(dev);
-
- dev_priv->gpu_error.reload_in_reset = false;
-
- mutex_unlock(&dev->struct_mutex);
if (ret) {
DRM_ERROR("Failed hw init on reset %d\n", ret);
- return ret;
+ goto error;
}

+ mutex_unlock(&dev->struct_mutex);
+
/*
* rps/rc6 re-init is necessary to restore state lost after the
* reset and the re-install of gt irqs. Skip for ironlake per
@@ -946,6 +948,11 @@ int i915_reset(struct drm_device *dev)
intel_enable_gt_powersave(dev);

return 0;
+
+error:
+ atomic_or(I915_WEDGED, &error->reset_counter);
+ mutex_unlock(&dev->struct_mutex);
+ return ret;
}

static int i915_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index b274237726de..60531df3844c 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1381,9 +1381,6 @@ struct i915_gpu_error {

/* For missed irq/seqno simulation. */
unsigned int test_irq_rings;
-
- /* Used to prevent gem_check_wedged returning -EAGAIN during gpu reset */
- bool reload_in_reset;
};

enum modeset_restore {
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 78bf980a69bf..2cdd20b3aeaf 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -83,9 +83,7 @@ i915_gem_wait_for_error(struct i915_gpu_error *error)
{
int ret;

-#define EXIT_COND (!i915_reset_in_progress_or_wedged(error) || \
- i915_terminally_wedged(error))
- if (EXIT_COND)
+ if (!i915_reset_in_progress(error))
return 0;

/*
@@ -94,17 +92,16 @@ i915_gem_wait_for_error(struct i915_gpu_error *error)
* we should simply try to bail out and fail as gracefully as possible.
*/
ret = wait_event_interruptible_timeout(error->reset_queue,
- EXIT_COND,
+ !i915_reset_in_progress(error),
10*HZ);
if (ret == 0) {
DRM_ERROR("Timed out waiting for the gpu reset to complete\n");
return -EIO;
} else if (ret < 0) {
return ret;
+ } else {
+ return 0;
}
-#undef EXIT_COND
-
- return 0;
}

int i915_mutex_lock_interruptible(struct drm_device *dev)
@@ -1112,22 +1109,16 @@ i915_gem_check_wedge(struct i915_gpu_error *error,
bool interruptible)
{
if (i915_reset_in_progress_or_wedged(error)) {
+ /* Recovery complete, but the reset failed ... */
+ if (i915_terminally_wedged(error))
+ return -EIO;
+
/* Non-interruptible callers can't handle -EAGAIN, hence return
* -EIO unconditionally for these. */
if (!interruptible)
return -EIO;

- /* Recovery complete, but the reset failed ... */
- if (i915_terminally_wedged(error))
- return -EIO;
-
- /*
- * Check if GPU Reset is in progress - we need intel_ring_begin
- * to work properly to reinit the hw state while the gpu is
- * still marked as reset-in-progress. Handle this with a flag.
- */
- if (!error->reload_in_reset)
- return -EAGAIN;
+ return -EAGAIN;
}

return 0;
diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index 9a6b0ac54d01..15973e917566 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -2466,7 +2466,6 @@ static void i915_error_wake_up(struct drm_i915_private *dev_priv,
static void i915_reset_and_wakeup(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = to_i915(dev);
- struct i915_gpu_error *error = &dev_priv->gpu_error;
char *error_event[] = { I915_ERROR_UEVENT "=1", NULL };
char *reset_event[] = { I915_RESET_UEVENT "=1", NULL };
char *reset_done_event[] = { I915_ERROR_UEVENT "=0", NULL };
@@ -2484,7 +2483,7 @@ static void i915_reset_and_wakeup(struct drm_device *dev)
* the reset in-progress bit is only ever set by code outside of this
* work we don't need to worry about any other races.
*/
- if (i915_reset_in_progress_or_wedged(error) && !i915_terminally_wedged(error)) {
+ if (i915_reset_in_progress(&dev_priv->gpu_error)) {
DRM_DEBUG_DRIVER("resetting chip\n");
kobject_uevent_env(&dev->primary->kdev->kobj, KOBJ_CHANGE,
reset_event);
@@ -2512,25 +2511,9 @@ static void i915_reset_and_wakeup(struct drm_device *dev)

intel_runtime_pm_put(dev_priv);

- if (ret == 0) {
- /*
- * After all the gem state is reset, increment the reset
- * counter and wake up everyone waiting for the reset to
- * complete.
- *
- * Since unlock operations are a one-sided barrier only,
- * we need to insert a barrier here to order any seqno
- * updates before
- * the counter increment.
- */
- smp_mb__before_atomic();
- atomic_inc(&dev_priv->gpu_error.reset_counter);
-
+ if (ret == 0)
kobject_uevent_env(&dev->primary->kdev->kobj,
KOBJ_CHANGE, reset_done_event);
- } else {
- atomic_or(I915_WEDGED, &error->reset_counter);
- }

/*
* Note: The wake_up also serves as a memory barrier so that
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:16:29 UTC
Permalink
One particularly stressful scenario consists of many independent tasks
all competing for GPU time and waiting upon the results (e.g. realtime
transcoding of many, many streams). One bottleneck in particular is that
each client waits on its own results, but every client is woken up after
every batchbuffer - hence the thunder of hooves as then every client must
do its heavyweight dance to read a coherent seqno to see if it is the
lucky one.

Ideally, we only want one client to wake up after the interrupt and
check its request for completion. Since the requests must retire in
order, we can select the first client on the oldest request to be woken.
Once that client has completed its wait, we can then wake up the
next client and so on. However, all clients then incur latency as every
process in the chain may be delayed for scheduling - this may also then
cause some priority inversion. To reduce the latency, when a client
is added or removed from the list, we scan the tree for completed
seqno and wake up all the completed waiters in parallel.
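
As a compressed sketch of the scheme (a sorted singly linked list
stands in for the driver's rbtree, and every name below is
illustrative): the interrupt wakes only the head waiter, and a waiter
that retires first wakes any already-completed successors in parallel
before handing the bottom-half role to the next still-pending waiter.

#include <stdint.h>

struct waiter {
	struct waiter *next;		/* sorted by seqno, oldest first */
	uint32_t seqno;
	void (*wake)(struct waiter *);
};

struct breadcrumbs {
	struct waiter *first;		/* the sole waiter woken by the IRQ */
};

/* Wrap-safe seqno comparison: true if s1 is at or after s2. */
static int seqno_passed(uint32_t s1, uint32_t s2)
{
	return (int32_t)(s1 - s2) >= 0;
}

/* Called by the retiring head waiter with the current HW seqno. */
static void handoff(struct breadcrumbs *b, uint32_t seqno)
{
	struct waiter *next = b->first->next;

	while (next && seqno_passed(seqno, next->seqno)) {
		struct waiter *done = next;

		next = next->next;
		done->wake(done);	/* completed: wake in parallel */
	}

	b->first = next;		/* promote the next bottom-half */
	if (next)
		next->wake(next);	/* kick it to recheck coherently */
}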

Using igt/benchmarks/gem_latency, we can demonstrate this effect. The
benchmark measures the number of GPU cycles between completion of a
batch and the client waking up from a call to wait-ioctl. With many
concurrent waiters, with each on a different request, we observe that
the wakeup latency before the patch scales nearly linearly with the
number of waiters (before external factors kick in making the scaling much
worse). After applying the patch, we can see that only the single waiter
for the request is being woken up, providing a constant wakeup latency
for every operation. However, the situation is not quite as rosy for
many waiters on the same request, though to the best of my knowledge this
is much less likely in practice. Here, we can observe that the
concurrent waiters incur extra latency from being woken up by the
solitary bottom-half, rather than directly by the interrupt. This
appears to be scheduler induced (having discounted adverse effects from
having a rbtree walk/erase in the wakeup path), each additional
wake_up_process() costs approximately 1us on big core. Another effect of
performing the secondary wakeups from the first bottom-half is the
incurred delay this imposes on high priority threads - rather than
immediately returning to userspace and leaving the interrupt handler to
wake the others.

To offset the delay incurred with additional waiters on a request, we
could use a hybrid scheme that did a quick read in the interrupt handler
and dequeued all the completed waiters (incurring the overhead in the
interrupt handler, not the best plan either as we then incur GPU
submission latency) but we would still have to wake up the bottom-half
every time to do the heavyweight slow read. Or we could only kick the
waiters on the seqno with the same priority as the current task (i.e. in
the realtime waiter scenario, only it is woken up immediately by the
interrupt and simply queues the next waiter before returning to userspace,
minimising its delay at the expense of the chain, and also reducing
contention on its scheduler runqueue). This is effective at avoiding long
pauses in the interrupt handler and at avoiding the extra latency in
realtime/high-priority waiters.
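
Extending the sketch above with a priority field (again purely
illustrative): the exiting waiter only continues waking the chain while
the next waiter's priority is no better than its own, which bounds the
latency a high-priority task donates to the herd.

/* Assumes struct waiter from the previous sketch gains an int
 * task_prio member; kernel convention is that a lower value means a
 * higher priority. */
static int chain_wakeup(const struct waiter *next, int my_prio)
{
	/* Stop at the first waiter that outranks us; it is promoted to
	 * bottom-half and woken directly by the interrupt instead. */
	return next && next->task_prio <= my_prio;
}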

v2: Convert from a kworker per engine into a dedicated kthread for the
bottom-half.
v3: Rename request members and tweak comments.
v4: Use a per-engine spinlock in the breadcrumbs bottom-half.
v5: Fix race in locklessly checking waiter status and kicking the task on
adding a new waiter.
v6: Fix deciding when to force the timer to hide missing interrupts.
v7: Move the bottom-half from the kthread to the first client process.
v8: Reword a few comments
v9: Break the busy loop when the interrupt is unmasked or has fired.
v10: Comments, unnecessary churn, better debugging from Tvrtko
v11: Wake all completed waiters on removing the current bottom-half to
reduce the latency of waking up a herd of clients all waiting on the
same request.
v12: Rearrange missed-interrupt fault injection so that it works with
igt/drv_missed_irq_hang
v13: Rename intel_breadcrumb and friends to intel_wait in preparation
for signal handling.
v14: RCU commentary, assert_spin_locked
v15: Hide BUG_ON behind the compiler; report on gem_latency findings.
v16: Sort seqno-groups by priority so that first-waiter has the highest
task priority (and so avoid priority inversion).

Testcase: igt/gem_concurrent_blit
Testcase: igt/benchmarks/gem_latency
Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
Cc: "Rogozhkin, Dmitry V" <***@intel.com>
Cc: "Gong, Zhipeng" <***@intel.com>
Cc: Tvrtko Ursulin <***@linux.intel.com>
Cc: Dave Gordon <***@intel.com>
---
drivers/gpu/drm/i915/Makefile | 1 +
drivers/gpu/drm/i915/i915_debugfs.c | 19 +-
drivers/gpu/drm/i915/i915_drv.h | 32 ++-
drivers/gpu/drm/i915/i915_gem.c | 141 +++++--------
drivers/gpu/drm/i915/i915_gpu_error.c | 2 +-
drivers/gpu/drm/i915/i915_irq.c | 20 +-
drivers/gpu/drm/i915/intel_breadcrumbs.c | 336 +++++++++++++++++++++++++++++++
drivers/gpu/drm/i915/intel_lrc.c | 5 +-
drivers/gpu/drm/i915/intel_ringbuffer.c | 5 +-
drivers/gpu/drm/i915/intel_ringbuffer.h | 69 ++++++-
10 files changed, 521 insertions(+), 109 deletions(-)
create mode 100644 drivers/gpu/drm/i915/intel_breadcrumbs.c

diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index 1e9895b9a546..99ce591c8574 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -37,6 +37,7 @@ i915-y += i915_cmd_parser.o \
i915_gem_userptr.o \
i915_gpu_error.o \
i915_trace_points.o \
+ intel_breadcrumbs.o \
intel_lrc.o \
intel_mocs.o \
intel_ringbuffer.o \
diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 6ff2d23faaa7..9396597b136d 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -730,10 +730,22 @@ static int i915_gem_request_info(struct seq_file *m, void *data)
static void i915_ring_seqno_info(struct seq_file *m,
struct intel_engine_cs *ring)
{
+ struct rb_node *rb;
+
if (ring->get_seqno) {
seq_printf(m, "Current sequence (%s): %x\n",
ring->name, ring->get_seqno(ring, false));
}
+
+ spin_lock(&ring->breadcrumbs.lock);
+ for (rb = rb_first(&ring->breadcrumbs.waiters);
+ rb != NULL;
+ rb = rb_next(rb)) {
+ struct intel_wait *w = container_of(rb, typeof(*w), node);
+ seq_printf(m, "Waiting (%s): %s [%d] on %x\n",
+ ring->name, w->task->comm, w->task->pid, w->seqno);
+ }
+ spin_unlock(&ring->breadcrumbs.lock);
}

static int i915_gem_seqno_info(struct seq_file *m, void *data)
@@ -1359,8 +1371,9 @@ static int i915_hangcheck_info(struct seq_file *m, void *unused)

for_each_ring(ring, dev_priv, i) {
seq_printf(m, "%s:\n", ring->name);
- seq_printf(m, "\tseqno = %x [current %x]\n",
- ring->hangcheck.seqno, seqno[i]);
+ seq_printf(m, "\tseqno = %x [current %x], waiters? %d\n",
+ ring->hangcheck.seqno, seqno[i],
+ intel_engine_has_waiter(ring));
seq_printf(m, "\tACTHD = 0x%08llx [current 0x%08llx]\n",
(long long)ring->hangcheck.acthd,
(long long)acthd[i]);
@@ -2346,7 +2359,7 @@ static int count_irq_waiters(struct drm_i915_private *i915)
int i;

for_each_ring(ring, i915, i)
- count += ring->irq_refcount;
+ count += intel_engine_has_waiter(ring);

return count;
}
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 201dd330f66a..a9e8de57e848 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1379,7 +1379,7 @@ struct i915_gpu_error {
#define I915_STOP_RING_ALLOW_WARN (1 << 30)

/* For missed irq/seqno simulation. */
- unsigned int test_irq_rings;
+ unsigned long test_irq_rings;
};

enum modeset_restore {
@@ -2813,7 +2813,6 @@ ibx_disable_display_interrupt(struct drm_i915_private *dev_priv, uint32_t bits)
ibx_display_interrupt_update(dev_priv, bits, 0);
}

-
/* i915_gem.c */
int i915_gem_create_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
@@ -3631,4 +3630,33 @@ static inline void i915_trace_irq_get(struct intel_engine_cs *ring,
i915_gem_request_assign(&ring->trace_irq_req, req);
}

+static inline bool __i915_request_irq_complete(struct drm_i915_gem_request *req)
+{
+ /* Ensure our read of the seqno is coherent so that we
+ * do not "miss an interrupt" (i.e. if this is the last
+ * request and the seqno write from the GPU is not visible
+ * by the time the interrupt fires, we will see that the
+ * request is incomplete and go back to sleep awaiting
+ * another interrupt that will never come.)
+ *
+ * Strictly, we only need to do this once after an interrupt,
+ * but it is easier and safer to do it every time the waiter
+ * is woken.
+ */
+ if (i915_gem_request_completed(req, false))
+ return true;
+
+ /* We need to check whether any gpu reset happened in between
+ * the request being submitted and now. If a reset has occurred,
+ * the request is effectively complete (we either are in the
+ * process of or have discarded the rendering and completely
+ * reset the GPU). The results of the request are lost and we
+ * are free to continue on with the original operation.
+ */
+ if (req->reset_counter != i915_reset_counter(&req->i915->gpu_error))
+ return true;
+
+ return false;
+}
+
#endif
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index b4da8b354a3b..4b26529f1f44 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1121,17 +1121,6 @@ i915_gem_check_wedge(unsigned reset_counter, bool interruptible)
return 0;
}

-static void fake_irq(unsigned long data)
-{
- wake_up_process((struct task_struct *)data);
-}
-
-static bool missed_irq(struct drm_i915_private *dev_priv,
- struct intel_engine_cs *ring)
-{
- return test_bit(ring->id, &dev_priv->gpu_error.missed_irq_rings);
-}
-
static unsigned long local_clock_us(unsigned *cpu)
{
unsigned long t;
@@ -1164,7 +1153,9 @@ static bool busywait_stop(unsigned long timeout, unsigned cpu)
return this_cpu != cpu;
}

-static int __i915_spin_request(struct drm_i915_gem_request *req, int state)
+static bool __i915_spin_request(struct drm_i915_gem_request *req,
+ struct intel_wait *wait,
+ int state)
{
unsigned long timeout;
unsigned cpu;
@@ -1179,31 +1170,30 @@ static int __i915_spin_request(struct drm_i915_gem_request *req, int state)
* takes to sleep on a request, on the order of a microsecond.
*/

- if (req->ring->irq_refcount)
- return -EBUSY;
-
/* Only spin if we know the GPU is processing this request */
if (!i915_gem_request_started(req, true))
- return -EAGAIN;
+ return false;

timeout = local_clock_us(&cpu) + 5;
- while (!need_resched()) {
+ do {
if (i915_gem_request_completed(req, true))
- return 0;
+ return true;

- if (signal_pending_state(state, current))
+ if (signal_pending_state(state, wait->task))
break;

if (busywait_stop(timeout, cpu))
break;

cpu_relax_lowlatency();
- }

- if (i915_gem_request_completed(req, false))
- return 0;
+ /* Break the loop if we have consumed the timeslice (or been
+ * preempted) or when either the background thread has
+ * enabled the interrupt, or the IRQ itself has fired.
+ */
+ } while (!need_resched() && wait->task->state == state);

- return -EAGAIN;
+ return false;
}

/**
@@ -1227,18 +1217,13 @@ int __i915_wait_request(struct drm_i915_gem_request *req,
s64 *timeout,
struct intel_rps_client *rps)
{
- struct intel_engine_cs *ring = i915_gem_request_get_ring(req);
- struct drm_device *dev = ring->dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
- const bool irq_test_in_progress =
- ACCESS_ONCE(dev_priv->gpu_error.test_irq_rings) & intel_ring_flag(ring);
int state = interruptible ? TASK_INTERRUPTIBLE : TASK_UNINTERRUPTIBLE;
- DEFINE_WAIT(wait);
- unsigned long timeout_expire;
+ struct intel_wait wait;
+ unsigned long timeout_remain;
s64 before, now;
- int ret;
+ int ret = 0;

- WARN(!intel_irqs_enabled(dev_priv), "IRQs disabled");
+ might_sleep();

if (list_empty(&req->list))
return 0;
@@ -1246,7 +1231,7 @@ int __i915_wait_request(struct drm_i915_gem_request *req,
if (i915_gem_request_completed(req, true))
return 0;

- timeout_expire = 0;
+ timeout_remain = MAX_SCHEDULE_TIMEOUT;
if (timeout) {
if (WARN_ON(*timeout < 0))
return -EINVAL;
@@ -1254,83 +1239,65 @@ int __i915_wait_request(struct drm_i915_gem_request *req,
if (*timeout == 0)
return -ETIME;

- timeout_expire = jiffies + nsecs_to_jiffies_timeout(*timeout);
+ timeout_remain = nsecs_to_jiffies_timeout(*timeout);
}

- if (INTEL_INFO(dev_priv)->gen >= 6)
- gen6_rps_boost(dev_priv, rps, req->emitted_jiffies);
-
/* Record current time in case interrupted by signal, or wedged */
trace_i915_gem_request_wait_begin(req);
before = ktime_get_raw_ns();

- /* Optimistic spin for the next jiffie before touching IRQs */
- ret = __i915_spin_request(req, state);
- if (ret == 0)
- goto out;
+ if (INTEL_INFO(req->i915)->gen >= 6)
+ gen6_rps_boost(req->i915, rps, req->emitted_jiffies);

- if (!irq_test_in_progress && WARN_ON(!ring->irq_get(ring))) {
- ret = -ENODEV;
- goto out;
- }
-
- for (;;) {
- struct timer_list timer;
+ intel_wait_init(&wait, req->seqno);
+ set_task_state(wait.task, state);

- prepare_to_wait(&ring->irq_queue, &wait, state);
+ /* Optimistic spin for the next ~jiffie before touching IRQs */
+ if (intel_engine_add_wait(req->ring, &wait)) {
+ if (__i915_spin_request(req, &wait, state))
+ goto complete;

- /* We need to check whether any gpu reset happened in between
- * the request being submitted and now. If a reset has occurred,
- * the request is effectively complete (we either are in the
- * process of or have discarded the rendering and completely
- * reset the GPU. The results of the request are lost and we
- * are free to continue on with the original operation.
+ /* In order to check that we haven't missed the interrupt
+ * as we enabled it, we need to kick ourselves to do a
+ * coherent check on the seqno before we sleep.
*/
- if (req->reset_counter != i915_reset_counter(&dev_priv->gpu_error)) {
- ret = 0;
- break;
- }
-
- if (i915_gem_request_completed(req, false)) {
- ret = 0;
- break;
- }
+ if (intel_engine_enable_wait_irq(req->ring, &wait))
+ goto wakeup;
+ }

- if (signal_pending_state(state, current)) {
+ for (;;) {
+ if (signal_pending_state(state, wait.task)) {
ret = -ERESTARTSYS;
break;
}

- if (timeout && time_after_eq(jiffies, timeout_expire)) {
+ /* Ensure that even if the GPU hangs, we get woken up. */
+ i915_queue_hangcheck(req->i915);
+
+ timeout_remain = io_schedule_timeout(timeout_remain);
+ if (timeout_remain == 0) {
ret = -ETIME;
break;
}

- /* Ensure that even if the GPU hangs, we get woken up. */
- i915_queue_hangcheck(dev_priv);
-
- timer.function = NULL;
- if (timeout || missed_irq(dev_priv, ring)) {
- unsigned long expire;
-
- setup_timer_on_stack(&timer, fake_irq, (unsigned long)current);
- expire = missed_irq(dev_priv, ring) ? jiffies + 1 : timeout_expire;
- mod_timer(&timer, expire);
- }
+ if (intel_wait_complete(&wait))
+ break;

- io_schedule();
+wakeup:
+ set_task_state(wait.task, state);

- if (timer.function) {
- del_singleshot_timer_sync(&timer);
- destroy_timer_on_stack(&timer);
- }
+ /* Carefully check if the request is complete, giving time
+ * for the seqno to be visible following the interrupt.
+ * We also have to check in case we are kicked by the GPU
+ * reset in order to drop the struct_mutex.
+ */
+ if (__i915_request_irq_complete(req))
+ break;
}
- if (!irq_test_in_progress)
- ring->irq_put(ring);
-
- finish_wait(&ring->irq_queue, &wait);

-out:
+complete:
+ intel_engine_remove_wait(req->ring, &wait);
+ __set_task_state(wait.task, TASK_RUNNING);
now = ktime_get_raw_ns();
trace_i915_gem_request_wait_end(req);

diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index 06ca4082735b..f805d117f3d1 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -900,7 +900,7 @@ static void i915_record_ring_state(struct drm_device *dev,
ering->instdone = I915_READ(GEN2_INSTDONE);
}

- ering->waiting = waitqueue_active(&ring->irq_queue);
+ ering->waiting = intel_engine_has_waiter(ring);
ering->instpm = I915_READ(RING_INSTPM(ring->mmio_base));
ering->seqno = ring->get_seqno(ring, false);
ering->acthd = intel_ring_get_active_head(ring);
diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index 2a8a9694eec5..95b997a57da8 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -1000,8 +1000,7 @@ static void notify_ring(struct intel_engine_cs *ring)
return;

trace_i915_gem_request_notify(ring);
-
- wake_up_all(&ring->irq_queue);
+ intel_engine_wakeup(ring);
}

static void vlv_c0_read(struct drm_i915_private *dev_priv,
@@ -1083,7 +1082,7 @@ static bool any_waiters(struct drm_i915_private *dev_priv)
int i;

for_each_ring(ring, dev_priv, i)
- if (ring->irq_refcount)
+ if (intel_engine_has_waiter(ring))
return true;

return false;
@@ -2431,9 +2430,6 @@ out:
static void i915_error_wake_up(struct drm_i915_private *dev_priv,
bool reset_completed)
{
- struct intel_engine_cs *ring;
- int i;
-
/*
* Notify all waiters for GPU completion events that reset state has
* been changed, and that they need to restart their wait after
@@ -2441,9 +2437,8 @@ static void i915_error_wake_up(struct drm_i915_private *dev_priv,
* a gpu reset pending so that i915_error_work_func can acquire them).
*/

- /* Wake up __wait_seqno, potentially holding dev->struct_mutex. */
- for_each_ring(ring, dev_priv, i)
- wake_up_all(&ring->irq_queue);
+ /* Wake up i915_wait_request, potentially holding dev->struct_mutex. */
+ intel_kick_waiters(dev_priv);

/* Wake up intel_crtc_wait_for_pending_flips, holding crtc->mutex. */
wake_up_all(&dev_priv->pending_flip_queue);
@@ -3079,16 +3074,17 @@ static void i915_hangcheck_elapsed(struct work_struct *work)
if (ring_idle(ring, seqno)) {
ring->hangcheck.action = HANGCHECK_IDLE;

- if (waitqueue_active(&ring->irq_queue)) {
+ if (intel_engine_has_waiter(ring)) {
/* Issue a wake-up to catch stuck h/w. */
if (!test_and_set_bit(ring->id, &dev_priv->gpu_error.missed_irq_rings)) {
- if (!(dev_priv->gpu_error.test_irq_rings & intel_ring_flag(ring)))
+ if (!test_bit(ring->id, &dev_priv->gpu_error.test_irq_rings))
DRM_ERROR("Hangcheck timer elapsed... %s idle\n",
ring->name);
else
DRM_INFO("Fake missed irq on %s\n",
ring->name);
- wake_up_all(&ring->irq_queue);
+
+ intel_engine_enable_fake_irq(ring);
}
/* Safeguard against driver failure */
ring->hangcheck.score += BUSY;
diff --git a/drivers/gpu/drm/i915/intel_breadcrumbs.c b/drivers/gpu/drm/i915/intel_breadcrumbs.c
new file mode 100644
index 000000000000..9f756583a44e
--- /dev/null
+++ b/drivers/gpu/drm/i915/intel_breadcrumbs.c
@@ -0,0 +1,336 @@
+/*
+ * Copyright © 2015 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice (including the next
+ * paragraph) shall be included in all copies or substantial portions of the
+ * Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+ * IN THE SOFTWARE.
+ *
+ */
+
+#include "i915_drv.h"
+
+static void intel_breadcrumbs_fake_irq(unsigned long data)
+{
+ struct intel_engine_cs *engine = (struct intel_engine_cs *)data;
+
+ /*
+ * The timer persists in case we cannot enable interrupts,
+ * or if we have previously seen seqno/interrupt incoherency
+ * ("missed interrupt" syndrome). Here the worker will wake up
+ * every jiffie in order to kick the oldest waiter to do the
+ * coherent seqno check.
+ */
+ rcu_read_lock();
+ if (intel_engine_wakeup(engine))
+ mod_timer(&engine->breadcrumbs.fake_irq, jiffies + 1);
+ rcu_read_unlock();
+}
+
+static void irq_enable(struct intel_engine_cs *engine)
+{
+ WARN_ON(!engine->irq_get(engine));
+}
+
+static void irq_disable(struct intel_engine_cs *engine)
+{
+ engine->irq_put(engine);
+}
+
+static bool __intel_breadcrumbs_enable_irq(struct intel_breadcrumbs *b)
+{
+ struct intel_engine_cs *engine =
+ container_of(b, struct intel_engine_cs, breadcrumbs);
+ bool noirq;
+
+ assert_spin_locked(&b->lock);
+ if (b->rpm_wakelock)
+ return false;
+
+ /* Since we are waiting on a request, the GPU should be busy
+ * and should have its own rpm reference. For completeness,
+ * record an rpm reference for ourselves to cover the
+ * interrupt we unmask.
+ */
+ intel_runtime_pm_get_noresume(engine->i915);
+ b->rpm_wakelock = true;
+
+ /* No interrupts? Kick the waiter every jiffie! */
+ noirq = true;
+ if (intel_irqs_enabled(engine->i915)) {
+ noirq = test_bit(engine->id,
+ &engine->i915->gpu_error.missed_irq_rings);
+ if (!test_bit(engine->id,
+ &engine->i915->gpu_error.test_irq_rings)) {
+ irq_enable(engine);
+ b->irq_enabled = true;
+ }
+ }
+ if (noirq)
+ mod_timer(&b->fake_irq, jiffies + 1);
+
+ return b->irq_enabled;
+}
+
+static void __intel_breadcrumbs_disable_irq(struct intel_breadcrumbs *b)
+{
+ struct intel_engine_cs *engine =
+ container_of(b, struct intel_engine_cs, breadcrumbs);
+
+ assert_spin_locked(&b->lock);
+ if (!b->rpm_wakelock)
+ return;
+
+ if (b->irq_enabled) {
+ irq_disable(engine);
+ b->irq_enabled = false;
+ }
+
+ intel_runtime_pm_put(engine->i915);
+ b->rpm_wakelock = false;
+}
+
+static inline struct intel_wait *to_wait(struct rb_node *node)
+{
+ return container_of(node, struct intel_wait, node);
+}
+
+static inline void __intel_breadcrumbs_finish(struct intel_breadcrumbs *b,
+ struct intel_wait *wait)
+{
+ assert_spin_locked(&b->lock);
+
+ /* This request is completed, so remove it from the tree, mark it as
+ * complete, and *then* wake up the associated task.
+ */
+ rb_erase(&wait->node, &b->waiters);
+ RB_CLEAR_NODE(&wait->node);
+
+ wake_up_process(wait->task); /* implicit smp_wmb() */
+}
+
+bool intel_engine_add_wait(struct intel_engine_cs *engine,
+ struct intel_wait *wait)
+{
+ struct intel_breadcrumbs *b = &engine->breadcrumbs;
+ u32 seqno = engine->get_seqno(engine, true);
+ struct rb_node **p, *parent, *completed;
+ bool first;
+
+ spin_lock(&b->lock);
+
+ /* Insert the request into the retirement-ordered list
+ * of waiters by walking the rbtree. If we are the oldest
+ * seqno in the tree (the first to be retired), then
+ * set ourselves as the bottom-half.
+ *
+ * As we descend the tree, prune completed branches: since we hold the
+ * spinlock, we know that the first_waiter must be delayed, and we can
+ * reduce some of the sequential wake-up latency if we take action
+ * ourselves and wake up the completed tasks in parallel. Also, by
+ * removing stale elements from the tree, we may be able to reduce the
+ * ping-pong between the old bottom-half and ourselves as first-waiter.
+ */
+ first = true;
+ parent = NULL;
+ completed = NULL;
+ p = &b->waiters.rb_node;
+ while (*p) {
+ parent = *p;
+ if (wait->seqno == to_wait(parent)->seqno) {
+ /* We have multiple waiters on the same seqno, select
+ * the highest priority task (that with the smallest
+ * task->prio) to serve as the bottom-half for this
+ * group.
+ */
+ if (wait->task->prio > to_wait(parent)->task->prio) {
+ p = &parent->rb_right;
+ first = false;
+ } else
+ p = &parent->rb_left;
+ } else if (i915_seqno_passed(wait->seqno,
+ to_wait(parent)->seqno)) {
+ p = &parent->rb_right;
+ if (i915_seqno_passed(seqno, to_wait(parent)->seqno))
+ completed = parent;
+ else
+ first = false;
+ } else
+ p = &parent->rb_left;
+ }
+ rb_link_node(&wait->node, parent, p);
+ rb_insert_color(&wait->node, &b->waiters);
+
+ if (completed != NULL) {
+ struct rb_node *next = rb_next(completed);
+
+ if (next && next != &wait->node) {
+ GEM_BUG_ON(first);
+ smp_store_mb(b->first_waiter, to_wait(next)->task);
+ /* If we enable the IRQ, we may have missed the
+ * interrupt for that seqno, so we have to wake up
+ * that bottom-half in order to do a coherent check
+ * in case the seqno passed.
+ */
+ if (__intel_breadcrumbs_enable_irq(b))
+ wake_up_process(to_wait(next)->task);
+ }
+
+ do {
+ struct intel_wait *crumb = to_wait(completed);
+ completed = rb_prev(completed);
+ __intel_breadcrumbs_finish(b, crumb);
+ } while (completed != NULL);
+ }
+
+ if (first)
+ smp_store_mb(b->first_waiter, wait->task);
+ GEM_BUG_ON(b->first_waiter == NULL);
+
+ spin_unlock(&b->lock);
+
+ return first;
+}
+
+bool intel_engine_enable_wait_irq(struct intel_engine_cs *engine,
+ const struct intel_wait *wait)
+{
+ struct intel_breadcrumbs *b = &engine->breadcrumbs;
+ bool first = false;
+
+ spin_lock(&b->lock);
+ if (b->first_waiter == wait->task)
+ first = __intel_breadcrumbs_enable_irq(b);
+ spin_unlock(&b->lock);
+
+ return first;
+}
+
+void intel_engine_enable_fake_irq(struct intel_engine_cs *engine)
+{
+ mod_timer(&engine->breadcrumbs.fake_irq, jiffies + 1);
+}
+
+static inline bool chain_wakeup(struct rb_node *rb, int priority)
+{
+ return rb && to_wait(rb)->task->prio <= priority;
+}
+
+void intel_engine_remove_wait(struct intel_engine_cs *engine,
+ struct intel_wait *wait)
+{
+ struct intel_breadcrumbs *b = &engine->breadcrumbs;
+
+ /* Quick check to see if this waiter was already decoupled from
+ * the tree by the bottom-half to avoid contention on the spinlock
+ * by the herd.
+ */
+ if (RB_EMPTY_NODE(&wait->node))
+ return;
+
+ spin_lock(&b->lock);
+
+ if (b->first_waiter == wait->task) {
+ struct rb_node *next;
+ struct task_struct *task;
+ const int priority = wait->task->prio;
+
+ /* We are the current bottom-half. Find the next candidate,
+ * the first waiter in the queue on the remaining oldest
+ * request. As multiple seqnos may complete in the time it
+ * takes us to wake up and find the next waiter, we have to
+ * wake up that waiter for it to perform its own coherent
+ * completion check.
+ */
+ next = rb_next(&wait->node);
+ if (chain_wakeup(next, priority)) {
+ /* If the next waiter is already complete,
+ * wake it up and continue on to the next waiter. So
+ * if we have a small herd, they will wake up in parallel
+ * rather than sequentially, which should reduce
+ * the overall latency in waking all the completed
+ * clients.
+ *
+ * However, waking up a chain adds extra latency to
+ * the first_waiter. This is undesirable if that
+ * waiter is a high priority task.
+ */
+ u32 seqno = engine->get_seqno(engine, true);
+ while (i915_seqno_passed(seqno,
+ to_wait(next)->seqno)) {
+ struct rb_node *n = rb_next(next);
+ __intel_breadcrumbs_finish(b, to_wait(next));
+ next = n;
+ if (!chain_wakeup(next, priority))
+ break;
+ }
+ }
+ task = next ? to_wait(next)->task : NULL;
+
+ smp_store_mb(b->first_waiter, task);
+ if (task) {
+ /* In our haste, we may have completed the first waiter
+ * before we enabled the interrupt. Do so now as we
+ * have a second waiter for a future seqno. Afterwards,
+ * we have to wake up that waiter in case we missed
+ * the interrupt, or if we have to handle an
+ * exception rather than a seqno completion.
+ */
+ if (to_wait(next)->seqno != wait->seqno)
+ __intel_breadcrumbs_enable_irq(b);
+ wake_up_process(task);
+ } else
+ __intel_breadcrumbs_disable_irq(b);
+ }
+
+ if (!RB_EMPTY_NODE(&wait->node))
+ rb_erase(&wait->node, &b->waiters);
+ spin_unlock(&b->lock);
+}
+
+void intel_engine_init_breadcrumbs(struct intel_engine_cs *engine)
+{
+ struct intel_breadcrumbs *b = &engine->breadcrumbs;
+
+ spin_lock_init(&b->lock);
+ setup_timer(&b->fake_irq,
+ intel_breadcrumbs_fake_irq,
+ (unsigned long)engine);
+}
+
+void intel_engine_fini_breadcrumbs(struct intel_engine_cs *engine)
+{
+ struct intel_breadcrumbs *b = &engine->breadcrumbs;
+
+ del_timer_sync(&b->fake_irq);
+}
+
+void intel_kick_waiters(struct drm_i915_private *i915)
+{
+ struct intel_engine_cs *engine;
+ int i;
+
+ /* To avoid the task_struct disappearing beneath us as we wake up
+ * the process, we must first inspect the task_struct->state under the
+ * RCU lock, i.e. as we call wake_up_process() we must be holding the
+ * rcu_read_lock().
+ */
+ rcu_read_lock();
+ for_each_ring(engine, i915, i)
+ intel_engine_wakeup(engine);
+ rcu_read_unlock();
+}
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 32644338e6f8..16fa58a0a930 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1928,6 +1928,8 @@ void intel_logical_ring_cleanup(struct intel_engine_cs *ring)
i915_cmd_parser_fini_ring(ring);
i915_gem_batch_pool_fini(&ring->batch_pool);

+ intel_engine_fini_breadcrumbs(ring);
+
if (ring->status_page.obj) {
kunmap(sg_page(ring->status_page.obj->pages->sgl));
ring->status_page.obj = NULL;
@@ -1945,10 +1947,11 @@ static int logical_ring_init(struct drm_device *dev, struct intel_engine_cs *rin
ring->buffer = NULL;

ring->dev = dev;
+ ring->i915 = to_i915(dev);
INIT_LIST_HEAD(&ring->active_list);
INIT_LIST_HEAD(&ring->request_list);
i915_gem_batch_pool_init(dev, &ring->batch_pool);
- init_waitqueue_head(&ring->irq_queue);
+ intel_engine_init_breadcrumbs(ring);

INIT_LIST_HEAD(&ring->buffers);
INIT_LIST_HEAD(&ring->execlist_queue);
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index a1d43b2c7077..60b0df2c5399 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -2152,6 +2152,7 @@ static int intel_init_ring_buffer(struct drm_device *dev,
WARN_ON(ring->buffer);

ring->dev = dev;
+ ring->i915 = to_i915(dev);
INIT_LIST_HEAD(&ring->active_list);
INIT_LIST_HEAD(&ring->request_list);
INIT_LIST_HEAD(&ring->execlist_queue);
@@ -2159,7 +2160,7 @@ static int intel_init_ring_buffer(struct drm_device *dev,
i915_gem_batch_pool_init(dev, &ring->batch_pool);
memset(ring->semaphore.sync_seqno, 0, sizeof(ring->semaphore.sync_seqno));

- init_waitqueue_head(&ring->irq_queue);
+ intel_engine_init_breadcrumbs(ring);

ringbuf = intel_engine_create_ringbuffer(ring, 32 * PAGE_SIZE);
if (IS_ERR(ringbuf)) {
@@ -2223,6 +2224,8 @@ void intel_cleanup_ring_buffer(struct intel_engine_cs *ring)

i915_cmd_parser_fini_ring(ring);
i915_gem_batch_pool_fini(&ring->batch_pool);
+ intel_engine_fini_breadcrumbs(ring);
+
ring->dev = NULL;
}

diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 7349d9258191..51fcb66bfc4a 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -158,9 +158,35 @@ struct intel_engine_cs {
#define LAST_USER_RING (VECS + 1)
u32 mmio_base;
struct drm_device *dev;
+ struct drm_i915_private *i915;
struct intel_ringbuffer *buffer;
struct list_head buffers;

+ /* Rather than have every client wait upon all user interrupts,
+ * with the herd waking after every interrupt and each doing the
+ * heavyweight seqno dance, we delegate the task (of being the
+ * bottom-half of the user interrupt) to the first client. After
+ * every interrupt, we wake up one client, who does the heavyweight
+ * coherent seqno read and either goes back to sleep (if incomplete),
+ * or wakes up all the completed clients in parallel, before then
+ * transferring the bottom-half status to the next client in the queue.
+ *
+ * Compared to walking the entire list of waiters in a single dedicated
+ * bottom-half, we reduce the latency of the first waiter by avoiding
+ * a context switch, but incur additional coherent seqno reads when
+ * following the chain of request breadcrumbs. Since it is most likely
+ * that we have a single client waiting on each seqno, reducing
+ * the overhead of waking that client is much preferred.
+ */
+ struct intel_breadcrumbs {
+ spinlock_t lock; /* protects the lists of requests */
+ struct rb_root waiters; /* sorted by retirement, priority */
+ struct task_struct *first_waiter; /* bh for user interrupts */
+ struct timer_list fake_irq; /* used after a missed interrupt */
+ bool irq_enabled;
+ bool rpm_wakelock;
+ } breadcrumbs;
+
/*
* A pool of objects to use as shadow copies of client batch buffers
* when the command parser is enabled. Prevents the client from
@@ -304,8 +330,6 @@ struct intel_engine_cs {

bool gpu_caches_dirty;

- wait_queue_head_t irq_queue;
-
struct intel_context *default_context;
struct intel_context *last_context;

@@ -511,4 +535,45 @@ void intel_ring_reserved_space_end(struct intel_ringbuffer *ringbuf);
/* Legacy ringbuffer specific portion of reservation code: */
int intel_ring_reserve_space(struct drm_i915_gem_request *request);

+/* intel_breadcrumbs.c -- user interrupt bottom-half for waiters */
+struct intel_wait {
+ struct rb_node node;
+ struct task_struct *task;
+ u32 seqno;
+};
+void intel_engine_init_breadcrumbs(struct intel_engine_cs *engine);
+static inline void intel_wait_init(struct intel_wait *wait, u32 seqno)
+{
+ wait->task = current;
+ wait->seqno = seqno;
+}
+static inline bool intel_wait_complete(const struct intel_wait *wait)
+{
+ return RB_EMPTY_NODE(&wait->node);
+}
+bool intel_engine_add_wait(struct intel_engine_cs *engine,
+ struct intel_wait *wait);
+bool intel_engine_enable_wait_irq(struct intel_engine_cs *engine,
+ const struct intel_wait *wait);
+void intel_engine_remove_wait(struct intel_engine_cs *engine,
+ struct intel_wait *wait);
+static inline bool intel_engine_has_waiter(struct intel_engine_cs *engine)
+{
+ return READ_ONCE(engine->breadcrumbs.first_waiter);
+}
+static inline bool intel_engine_wakeup(struct intel_engine_cs *engine)
+{
+ struct task_struct *task = READ_ONCE(engine->breadcrumbs.first_waiter);
+ /* Note that for this not to dangerously chase a dangling pointer,
+ * the caller is responsible for ensuring that the task remains valid
+ * for wake_up_process(), i.e. that the RCU grace period cannot expire.
+ */
+ if (task)
+ wake_up_process(task);
+ return task != NULL;
+}
+void intel_engine_enable_fake_irq(struct intel_engine_cs *engine);
+void intel_engine_fini_breadcrumbs(struct intel_engine_cs *engine);
+void intel_kick_waiters(struct drm_i915_private *i915);
+
#endif /* _INTEL_RINGBUFFER_H_ */
--
2.7.0.rc3
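
For illustration, a minimal sketch of how a waiter might drive the
intel_wait API declared above (a hypothetical caller: error and signal
handling are omitted, and the exact interplay with the optimistic spin
performed by the real __i915_wait_request is not shown):

static void example_wait_for_seqno(struct intel_engine_cs *engine, u32 seqno)
{
	struct intel_wait wait;

	intel_wait_init(&wait, seqno); /* wait.task = current */

	if (intel_engine_add_wait(engine, &wait)) {
		/* We are the bottom-half: make sure the user interrupt
		 * is enabled so that we are woken upon completion.
		 */
		intel_engine_enable_wait_irq(engine, &wait);
	}

	for (;;) {
		set_current_state(TASK_UNINTERRUPTIBLE);

		/* Either our seqno passed, or the bottom-half decoupled
		 * us from the tree after performing the coherent seqno
		 * check on our behalf.
		 */
		if (i915_seqno_passed(engine->get_seqno(engine, false),
				      seqno))
			break;
		if (intel_wait_complete(&wait))
			break;

		schedule();
	}
	__set_current_state(TASK_RUNNING);

	intel_engine_remove_wait(engine, &wait);
}
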
Chris Wilson
2016-01-11 09:17:21 UTC
Permalink
Now that emitting requests is identical between legacy and execlists, we
can use the same function to build up the ring for submitting to either
engine. (With the exception of i915_switch_context(), but in time that
will also be handled gracefully.)

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_drv.h | 20 -----
drivers/gpu/drm/i915/i915_gem.c | 2 -
drivers/gpu/drm/i915/i915_gem_context.c | 3 +-
drivers/gpu/drm/i915/i915_gem_execbuffer.c | 24 ++++--
drivers/gpu/drm/i915/intel_lrc.c | 129 -----------------------------
drivers/gpu/drm/i915/intel_lrc.h | 4 -
6 files changed, 20 insertions(+), 162 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 0c580124d46d..cae448e238ca 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1655,18 +1655,6 @@ struct i915_virtual_gpu {
bool active;
};

-struct i915_execbuffer_params {
- struct drm_device *dev;
- struct drm_file *file;
- uint32_t dispatch_flags;
- uint32_t args_batch_start_offset;
- uint64_t batch_obj_vm_offset;
- struct intel_engine_cs *ring;
- struct drm_i915_gem_object *batch_obj;
- struct intel_context *ctx;
- struct drm_i915_gem_request *request;
-};
-
/* used in computing the new watermarks state */
struct intel_wm_config {
unsigned int num_pipes_active;
@@ -1934,9 +1922,6 @@ struct drm_i915_private {

/* Abstract the submission mechanism (legacy ringbuffer or execlists) away */
struct {
- int (*execbuf_submit)(struct i915_execbuffer_params *params,
- struct drm_i915_gem_execbuffer2 *args,
- struct list_head *vmas);
int (*init_rings)(struct drm_device *dev);
void (*cleanup_ring)(struct intel_engine_cs *ring);
void (*stop_ring)(struct intel_engine_cs *ring);
@@ -2656,11 +2641,6 @@ int i915_gem_set_domain_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
int i915_gem_sw_finish_ioctl(struct drm_device *dev, void *data,
struct drm_file *file_priv);
-void i915_gem_execbuffer_move_to_active(struct list_head *vmas,
- struct drm_i915_gem_request *req);
-int i915_gem_ringbuffer_submission(struct i915_execbuffer_params *params,
- struct drm_i915_gem_execbuffer2 *args,
- struct list_head *vmas);
int i915_gem_execbuffer(struct drm_device *dev, void *data,
struct drm_file *file_priv);
int i915_gem_execbuffer2(struct drm_device *dev, void *data,
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 5b5afdcd9634..235a3de6e0a0 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -4308,12 +4308,10 @@ int i915_gem_init(struct drm_device *dev)
mutex_lock(&dev->struct_mutex);

if (!i915.enable_execlists) {
- dev_priv->gt.execbuf_submit = i915_gem_ringbuffer_submission;
dev_priv->gt.init_rings = i915_gem_init_rings;
dev_priv->gt.cleanup_ring = intel_engine_cleanup;
dev_priv->gt.stop_ring = intel_engine_stop;
} else {
- dev_priv->gt.execbuf_submit = intel_execlists_submission;
dev_priv->gt.init_rings = intel_logical_rings_init;
dev_priv->gt.cleanup_ring = intel_logical_ring_cleanup;
dev_priv->gt.stop_ring = intel_logical_ring_stop;
diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index c078ebc29da5..72b0875a95a4 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -819,8 +819,9 @@ unpin_out:
*/
int i915_switch_context(struct drm_i915_gem_request *req)
{
+ if (i915.enable_execlists)
+ return 0;

- WARN_ON(i915.enable_execlists);
WARN_ON(!mutex_is_locked(&req->i915->dev->struct_mutex));

if (req->ctx->legacy_hw_ctx.rcs_state == NULL) { /* We have the fake context */
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index 3e6384deca65..6dee27224ddb 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -41,6 +41,18 @@

#define BATCH_OFFSET_BIAS (256*1024)

+struct i915_execbuffer_params {
+ struct drm_device *dev;
+ struct drm_file *file;
+ uint32_t dispatch_flags;
+ uint32_t args_batch_start_offset;
+ uint64_t batch_obj_vm_offset;
+ struct intel_engine_cs *ring;
+ struct drm_i915_gem_object *batch_obj;
+ struct intel_context *ctx;
+ struct drm_i915_gem_request *request;
+};
+
struct eb_vmas {
struct list_head vmas;
int and;
@@ -1093,7 +1105,7 @@ i915_gem_validate_context(struct drm_device *dev, struct drm_file *file,
return ctx;
}

-void
+static void
i915_gem_execbuffer_move_to_active(struct list_head *vmas,
struct drm_i915_gem_request *req)
{
@@ -1219,10 +1231,10 @@ err:
return ERR_PTR(ret);
}

-int
-i915_gem_ringbuffer_submission(struct i915_execbuffer_params *params,
- struct drm_i915_gem_execbuffer2 *args,
- struct list_head *vmas)
+static int
+execbuf_submit(struct i915_execbuffer_params *params,
+ struct drm_i915_gem_execbuffer2 *args,
+ struct list_head *vmas)
{
struct intel_ring *ring = params->request->ring;
struct drm_i915_private *dev_priv = params->request->i915;
@@ -1620,7 +1632,7 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
params->batch_obj = batch_obj;
params->ctx = ctx;

- ret = dev_priv->gt.execbuf_submit(params, args, &eb->vmas);
+ ret = execbuf_submit(params, args, &eb->vmas);
i915_gem_execbuffer_retire_commands(params);

err_batch_unpin:
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 2f92c43397eb..84a8bcc90d78 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -616,39 +616,6 @@ static int execlists_context_queue(struct drm_i915_gem_request *request)
return 0;
}

-static int execlists_move_to_gpu(struct drm_i915_gem_request *req,
- struct list_head *vmas)
-{
- const unsigned other_rings = ~intel_engine_flag(req->engine);
- struct i915_vma *vma;
- uint32_t flush_domains = 0;
- bool flush_chipset = false;
- int ret;
-
- list_for_each_entry(vma, vmas, exec_list) {
- struct drm_i915_gem_object *obj = vma->obj;
-
- if (obj->active & other_rings) {
- ret = i915_gem_object_sync(obj, req);
- if (ret)
- return ret;
- }
-
- if (obj->base.write_domain & I915_GEM_DOMAIN_CPU)
- flush_chipset |= i915_gem_clflush_object(obj, false);
-
- flush_domains |= obj->base.write_domain;
- }
-
- if (flush_domains & I915_GEM_DOMAIN_GTT)
- wmb();
-
- /* Unconditionally invalidate gpu caches and ensure that we do flush
- * any residual writes from the previous batch.
- */
- return req->engine->emit_flush(req, I915_GEM_GPU_DOMAINS, 0);
-}
-
int intel_logical_ring_alloc_request_extras(struct drm_i915_gem_request *request)
{
int ret;
@@ -700,102 +667,6 @@ intel_logical_ring_advance_and_submit(struct drm_i915_gem_request *request)
execlists_context_queue(request);
}

-/**
- * execlists_submission() - submit a batchbuffer for execution, Execlists style
- * @dev: DRM device.
- * @file: DRM file.
- * @ring: Engine Command Streamer to submit to.
- * @ctx: Context to employ for this submission.
- * @args: execbuffer call arguments.
- * @vmas: list of vmas.
- * @batch_obj: the batchbuffer to submit.
- * @exec_start: batchbuffer start virtual address pointer.
- * @dispatch_flags: translated execbuffer call flags.
- *
- * This is the evil twin version of i915_gem_ringbuffer_submission. It abstracts
- * away the submission details of the execbuffer ioctl call.
- *
- * Return: non-zero if the submission fails.
- */
-int intel_execlists_submission(struct i915_execbuffer_params *params,
- struct drm_i915_gem_execbuffer2 *args,
- struct list_head *vmas)
-{
- struct drm_device *dev = params->dev;
- struct intel_engine_cs *engine = params->ring;
- struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_ring *ring = params->request->ring;
- u64 exec_start;
- int instp_mode;
- u32 instp_mask;
- int ret;
-
- instp_mode = args->flags & I915_EXEC_CONSTANTS_MASK;
- instp_mask = I915_EXEC_CONSTANTS_MASK;
- switch (instp_mode) {
- case I915_EXEC_CONSTANTS_REL_GENERAL:
- case I915_EXEC_CONSTANTS_ABSOLUTE:
- case I915_EXEC_CONSTANTS_REL_SURFACE:
- if (instp_mode != 0 && engine->id != RCS) {
- DRM_DEBUG("non-0 rel constants mode on non-RCS\n");
- return -EINVAL;
- }
-
- if (instp_mode != dev_priv->relative_constants_mode) {
- if (instp_mode == I915_EXEC_CONSTANTS_REL_SURFACE) {
- DRM_DEBUG("rel surface constants mode invalid on gen5+\n");
- return -EINVAL;
- }
-
- /* The HW changed the meaning on this bit on gen6 */
- instp_mask &= ~I915_EXEC_CONSTANTS_REL_SURFACE;
- }
- break;
- default:
- DRM_DEBUG("execbuf with unknown constants: %d\n", instp_mode);
- return -EINVAL;
- }
-
- if (args->flags & I915_EXEC_GEN7_SOL_RESET) {
- DRM_DEBUG("sol reset is gen7 only\n");
- return -EINVAL;
- }
-
- ret = execlists_move_to_gpu(params->request, vmas);
- if (ret)
- return ret;
-
- if (engine->id == RCS &&
- instp_mode != dev_priv->relative_constants_mode) {
- ret = intel_ring_begin(params->request, 4);
- if (ret)
- return ret;
-
- intel_ring_emit(ring, MI_NOOP);
- intel_ring_emit(ring, MI_LOAD_REGISTER_IMM(1));
- intel_ring_emit_reg(ring, INSTPM);
- intel_ring_emit(ring, instp_mask << 16 | instp_mode);
- intel_ring_advance(ring);
-
- dev_priv->relative_constants_mode = instp_mode;
- }
-
- exec_start = params->batch_obj_vm_offset +
- args->batch_start_offset;
-
- ret = engine->emit_bb_start(params->request,
- exec_start, args->batch_len,
- params->dispatch_flags);
- if (ret)
- return ret;
-
- trace_i915_gem_ring_dispatch(params->request, params->dispatch_flags);
-
- i915_gem_execbuffer_move_to_active(vmas, params->request);
-
- return 0;
-}
-
void intel_execlists_retire_requests(struct intel_engine_cs *ring)
{
struct drm_i915_gem_request *req, *tmp;
diff --git a/drivers/gpu/drm/i915/intel_lrc.h b/drivers/gpu/drm/i915/intel_lrc.h
index 7f01d2ddacfa..87bc9acc4224 100644
--- a/drivers/gpu/drm/i915/intel_lrc.h
+++ b/drivers/gpu/drm/i915/intel_lrc.h
@@ -79,10 +79,6 @@ uint64_t intel_lr_context_descriptor(struct intel_context *ctx,

/* Execlists */
int intel_sanitize_enable_execlists(struct drm_device *dev, int enable_execlists);
-struct i915_execbuffer_params;
-int intel_execlists_submission(struct i915_execbuffer_params *params,
- struct drm_i915_gem_execbuffer2 *args,
- struct list_head *vmas);
u32 intel_execlists_ctx_id(struct drm_i915_gem_object *ctx_obj);

void intel_lrc_irq_handler(struct intel_engine_cs *ring);
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:17:34 UTC
Permalink
When we call i915_vma_unbind(), we will wait upon outstanding rendering.
This will also trigger a retirement phase, which may update the object
lists. If we extend request tracking to the VMA itself (rather than
keeping it on the encompassing object), then obj->vma_list may be
modified for other elements upon i915_vma_unbind(). As a result, if we
walk over the object list and call i915_vma_unbind(), we need to be
prepared for that list to change.
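
To make the hazard concrete, a sketch of the failure mode in the pattern
being replaced (illustrative only, not new driver code):

	struct i915_vma *vma, *next;

	/* list_for_each_entry_safe() only guards against deleting the
	 * element we are currently visiting; if i915_vma_unbind()
	 * retires requests and thereby unlinks *other* vmas, the cached
	 * 'next' pointer may already be stale when we follow it.
	 */
	list_for_each_entry_safe(vma, next, &obj->vma_list, obj_link)
		i915_vma_unbind(vma); /* may unlink 'next' behind our back */

Hence the loop below always takes the first element, moves it onto a
private still_in_list and splices the survivors back afterwards.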

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_drv.h | 2 ++
drivers/gpu/drm/i915/i915_gem.c | 54 ++++++++++++++++++++++++--------
drivers/gpu/drm/i915/i915_gem_shrinker.c | 6 +---
drivers/gpu/drm/i915/i915_gem_userptr.c | 4 +--
4 files changed, 45 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 8f5cf244094e..9fa925389332 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2707,6 +2707,8 @@ int __must_check i915_vma_unbind(struct i915_vma *vma);
* _guarantee_ VMA in question is _not in use_ anywhere.
*/
int __must_check __i915_vma_unbind_no_wait(struct i915_vma *vma);
+
+int i915_gem_object_unbind(struct drm_i915_gem_object *obj);
int i915_gem_object_put_pages(struct drm_i915_gem_object *obj);
void i915_gem_release_all_mmaps(struct drm_i915_private *dev_priv);
void i915_gem_release_mmap(struct drm_i915_gem_object *obj);
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index ed3f306af42f..95e69dc47fc8 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -254,18 +254,38 @@ static const struct drm_i915_gem_object_ops i915_gem_phys_ops = {
.release = i915_gem_object_release_phys,
};

+int
+i915_gem_object_unbind(struct drm_i915_gem_object *obj)
+{
+ struct list_head still_in_list;
+
+ INIT_LIST_HEAD(&still_in_list);
+ while (!list_empty(&obj->vma_list)) {
+ struct i915_vma *vma =
+ list_first_entry(&obj->vma_list,
+ struct i915_vma,
+ obj_link);
+ int ret;
+
+ list_move_tail(&vma->obj_link, &still_in_list);
+ ret = i915_vma_unbind(vma);
+ if (ret)
+ break;
+ }
+ list_splice(&still_in_list, &obj->vma_list);
+
+ return 0;
+}
+
static int
drop_pages(struct drm_i915_gem_object *obj)
{
- struct i915_vma *vma, *next;
int ret;

drm_gem_object_reference(&obj->base);
- list_for_each_entry_safe(vma, next, &obj->vma_list, obj_link)
- if (i915_vma_unbind(vma))
- break;
-
- ret = i915_gem_object_put_pages(obj);
+ ret = i915_gem_object_unbind(obj);
+ if (ret == 0)
+ ret = i915_gem_object_put_pages(obj);
drm_gem_object_unreference(&obj->base);

return ret;
@@ -3038,7 +3058,7 @@ int i915_gem_object_set_cache_level(struct drm_i915_gem_object *obj,
enum i915_cache_level cache_level)
{
struct drm_device *dev = obj->base.dev;
- struct i915_vma *vma, *next;
+ struct i915_vma *vma;
int ret = 0;

if (obj->cache_level == cache_level)
@@ -3049,7 +3069,8 @@ int i915_gem_object_set_cache_level(struct drm_i915_gem_object *obj,
* catch the issue of the CS prefetch crossing page boundaries and
* reading an invalid PTE on older architectures.
*/
- list_for_each_entry_safe(vma, next, &obj->vma_list, obj_link) {
+restart:
+ list_for_each_entry(vma, &obj->vma_list, obj_link) {
if (!drm_mm_node_allocated(&vma->node))
continue;

@@ -3058,11 +3079,18 @@ int i915_gem_object_set_cache_level(struct drm_i915_gem_object *obj,
return -EBUSY;
}

- if (!i915_gem_valid_gtt_space(vma, cache_level)) {
- ret = i915_vma_unbind(vma);
- if (ret)
- return ret;
- }
+ if (i915_gem_valid_gtt_space(vma, cache_level))
+ continue;
+
+ ret = i915_vma_unbind(vma);
+ if (ret)
+ return ret;
+
+ /* As unbinding may affect other elements in the
+ * obj->vma_list (due to side-effects from retiring
+ * an active vma), play safe and restart the iterator.
+ */
+ goto restart;
}

/* We can reuse the existing drm_mm nodes but need to change the
diff --git a/drivers/gpu/drm/i915/i915_gem_shrinker.c b/drivers/gpu/drm/i915/i915_gem_shrinker.c
index fa190ef3f727..e15fc7531f08 100644
--- a/drivers/gpu/drm/i915/i915_gem_shrinker.c
+++ b/drivers/gpu/drm/i915/i915_gem_shrinker.c
@@ -141,7 +141,6 @@ i915_gem_shrink(struct drm_i915_private *dev_priv,
INIT_LIST_HEAD(&still_in_list);
while (count < target && !list_empty(phase->list)) {
struct drm_i915_gem_object *obj;
- struct i915_vma *vma, *v;

obj = list_first_entry(phase->list,
typeof(*obj), global_list);
@@ -160,10 +159,7 @@ i915_gem_shrink(struct drm_i915_private *dev_priv,
drm_gem_object_reference(&obj->base);

/* For the unbound phase, this should be a no-op! */
- list_for_each_entry_safe(vma, v,
- &obj->vma_list, obj_link)
- if (i915_vma_unbind(vma))
- break;
+ i915_gem_object_unbind(obj);

if (i915_gem_object_put_pages(obj) == 0)
count += obj->base.size >> PAGE_SHIFT;
diff --git a/drivers/gpu/drm/i915/i915_gem_userptr.c b/drivers/gpu/drm/i915/i915_gem_userptr.c
index 2f3638d02bdd..a90392246471 100644
--- a/drivers/gpu/drm/i915/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/i915_gem_userptr.c
@@ -75,14 +75,12 @@ static void __cancel_userptr__worker(struct work_struct *work)

if (obj->pages != NULL) {
struct drm_i915_private *dev_priv = to_i915(dev);
- struct i915_vma *vma, *tmp;
bool was_interruptible;

was_interruptible = dev_priv->mm.interruptible;
dev_priv->mm.interruptible = false;

- list_for_each_entry_safe(vma, tmp, &obj->vma_list, obj_link)
- WARN_ON(i915_vma_unbind(vma));
+ WARN_ON(i915_gem_object_unbind(obj));
WARN_ON(i915_gem_object_put_pages(obj));

dev_priv->mm.interruptible = was_interruptible;
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:17:08 UTC
Permalink
Now that we share intel_ring_begin(), reserving space for the tail of
the request is identical between legacy/execlists and so the tautology
can be removed.

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_gem_request.c | 7 +++----
drivers/gpu/drm/i915/intel_lrc.c | 15 ---------------
drivers/gpu/drm/i915/intel_lrc.h | 1 -
drivers/gpu/drm/i915/intel_ringbuffer.c | 15 ---------------
drivers/gpu/drm/i915/intel_ringbuffer.h | 3 ---
5 files changed, 3 insertions(+), 38 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
index 619a9b063d9c..85067069995e 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/i915_gem_request.c
@@ -255,10 +255,9 @@ int i915_gem_request_alloc(struct intel_engine_cs *ring,
* to be redone if the request is not actually submitted straight
* away, e.g. because a GPU scheduler has deferred it.
*/
- if (i915.enable_execlists)
- ret = intel_logical_ring_reserve_space(req);
- else
- ret = intel_ring_reserve_space(req);
+ intel_ring_reserved_space_reserve(req->ringbuf,
+ MIN_SPACE_FOR_ADD_REQUEST);
+ ret = intel_ring_begin(req, 0);
if (ret) {
/*
* At this point, the request is fully allocated even if not
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 3d14b69632e8..4f1944929330 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -721,21 +721,6 @@ intel_logical_ring_advance_and_submit(struct drm_i915_gem_request *request)
execlists_context_queue(request);
}

-int intel_logical_ring_reserve_space(struct drm_i915_gem_request *request)
-{
- /*
- * The first call merely notes the reserve request and is common for
- * all back ends. The subsequent localised _begin() call actually
- * ensures that the reservation is available. Without the begin, if
- * the request creator immediately submitted the request without
- * adding any commands to it then there might not actually be
- * sufficient room for the submission commands.
- */
- intel_ring_reserved_space_reserve(request->ringbuf, MIN_SPACE_FOR_ADD_REQUEST);
-
- return intel_ring_begin(request, 0);
-}
-
/**
* execlists_submission() - submit a batchbuffer for execution, Execlists style
* @dev: DRM device.
diff --git a/drivers/gpu/drm/i915/intel_lrc.h b/drivers/gpu/drm/i915/intel_lrc.h
index 32401e11cebe..c88988a41898 100644
--- a/drivers/gpu/drm/i915/intel_lrc.h
+++ b/drivers/gpu/drm/i915/intel_lrc.h
@@ -56,7 +56,6 @@

/* Logical Rings */
int intel_logical_ring_alloc_request_extras(struct drm_i915_gem_request *request);
-int intel_logical_ring_reserve_space(struct drm_i915_gem_request *request);
void intel_logical_ring_stop(struct intel_engine_cs *ring);
void intel_logical_ring_cleanup(struct intel_engine_cs *ring);
int intel_logical_rings_init(struct drm_device *dev);
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index c694f602a0b8..db5c407f7720 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -2086,21 +2086,6 @@ int intel_ring_alloc_request_extras(struct drm_i915_gem_request *request)
return 0;
}

-int intel_ring_reserve_space(struct drm_i915_gem_request *request)
-{
- /*
- * The first call merely notes the reserve request and is common for
- * all back ends. The subsequent localised _begin() call actually
- * ensures that the reservation is available. Without the begin, if
- * the request creator immediately submitted the request without
- * adding any commands to it then there might not actually be
- * sufficient room for the submission commands.
- */
- intel_ring_reserved_space_reserve(request->ringbuf, MIN_SPACE_FOR_ADD_REQUEST);
-
- return intel_ring_begin(request, 0);
-}
-
void intel_ring_reserved_space_reserve(struct intel_ringbuffer *ringbuf, int size)
{
WARN_ON(ringbuf->reserved_size);
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 9c19a6ca8e7d..bc6ceb54b1f3 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -536,9 +536,6 @@ void intel_ring_reserved_space_use(struct intel_ringbuffer *ringbuf);
/* Finish with the reserved space - for use by i915_add_request() only. */
void intel_ring_reserved_space_end(struct intel_ringbuffer *ringbuf);

-/* Legacy ringbuffer specific portion of reservation code: */
-int intel_ring_reserve_space(struct drm_i915_gem_request *request);
-
/* intel_breadcrumbs.c -- user interrupt bottom-half for waiters */
struct intel_wait {
struct rb_node node;
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:16:57 UTC
Permalink
dma-buf provides a generic fence class for interoperation between
drivers. Internally we use the request structure as a fence, and so with
only a little bit of interfacing we can rebase those requests on top of
dma-buf fences. This will allow us, in the future, to pass those fences
back to userspace or between drivers.

v2: The fence_context needs to be globally unique, not just unique to
this device.
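
As a sketch of the interoperation this buys us (a hypothetical caller,
using the generic fence API from linux/fence.h of this era):

	/* Any code holding a struct fence * can now wait upon an i915
	 * request without knowing anything about i915 internals.
	 */
	struct fence *f = fence_get(&req->fence);

	if (!fence_is_signaled(f))
		fence_wait(f, true); /* interruptible, unbounded wait */

	fence_put(f);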

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
Cc: Jesse Barnes <***@virtuousgeek.org>
Cc: Daniel Vetter <***@ffwll.ch>
---
drivers/gpu/drm/i915/i915_debugfs.c | 2 +-
drivers/gpu/drm/i915/i915_gem_request.c | 111 +++++++++++++++++++++++++----
drivers/gpu/drm/i915/i915_gem_request.h | 33 ++++-----
drivers/gpu/drm/i915/i915_gpu_error.c | 2 +-
drivers/gpu/drm/i915/i915_guc_submission.c | 2 +-
drivers/gpu/drm/i915/i915_trace.h | 2 +-
drivers/gpu/drm/i915/intel_breadcrumbs.c | 3 +-
drivers/gpu/drm/i915/intel_lrc.c | 3 +-
drivers/gpu/drm/i915/intel_ringbuffer.c | 15 ++--
drivers/gpu/drm/i915/intel_ringbuffer.h | 1 +
10 files changed, 133 insertions(+), 41 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 6172649b7e56..b82482573a8f 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -710,7 +710,7 @@ static int i915_gem_request_info(struct seq_file *m, void *data)
if (req->pid)
task = pid_task(req->pid, PIDTYPE_PID);
seq_printf(m, " %x @ %d: %s [%d]\n",
- req->seqno,
+ req->fence.seqno,
(int) (jiffies - req->emitted_jiffies),
task ? task->comm : "<unknown>",
task ? task->pid : -1);
diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
index 1c4f4d83a3c2..e366ca0dcd99 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/i915_gem_request.c
@@ -24,6 +24,92 @@

#include "i915_drv.h"

+static inline struct drm_i915_gem_request *
+to_i915_request(struct fence *fence)
+{
+ return container_of(fence, struct drm_i915_gem_request, fence);
+}
+
+static const char *i915_fence_get_driver_name(struct fence *fence)
+{
+ return "i915";
+}
+
+static const char *i915_fence_get_timeline_name(struct fence *fence)
+{
+ return to_i915_request(fence)->ring->name;
+}
+
+static bool i915_fence_signaled(struct fence *fence)
+{
+ return i915_gem_request_completed(to_i915_request(fence));
+}
+
+static bool i915_fence_enable_signaling(struct fence *fence)
+{
+ if (i915_fence_signaled(fence))
+ return false;
+
+ return intel_engine_enable_signaling(to_i915_request(fence)) == 0;
+}
+
+static signed long i915_fence_wait(struct fence *fence,
+ bool interruptible,
+ signed long timeout_jiffies)
+{
+ s64 timeout_ns, *timeout;
+ int ret;
+
+ if (timeout_jiffies != MAX_SCHEDULE_TIMEOUT) {
+ timeout_ns = jiffies_to_nsecs(timeout_jiffies);
+ timeout = &timeout_ns;
+ } else
+ timeout = NULL;
+
+ ret = __i915_wait_request(to_i915_request(fence),
+ interruptible, timeout,
+ NULL);
+ if (ret == -ETIME)
+ return 0;
+
+ if (ret < 0)
+ return ret;
+
+ if (timeout_jiffies != MAX_SCHEDULE_TIMEOUT)
+ timeout_jiffies = nsecs_to_jiffies(timeout_ns);
+
+ return timeout_jiffies;
+}
+
+static void i915_fence_value_str(struct fence *fence, char *str, int size)
+{
+ snprintf(str, size, "%u", fence->seqno);
+}
+
+static void i915_fence_timeline_value_str(struct fence *fence, char *str,
+ int size)
+{
+ snprintf(str, size, "%u",
+ intel_ring_get_seqno(to_i915_request(fence)->ring));
+}
+
+static void i915_fence_release(struct fence *fence)
+{
+ struct drm_i915_gem_request *req = to_i915_request(fence);
+ kmem_cache_free(req->i915->requests, req);
+}
+
+static const struct fence_ops i915_fence_ops = {
+ .get_driver_name = i915_fence_get_driver_name,
+ .get_timeline_name = i915_fence_get_timeline_name,
+ .enable_signaling = i915_fence_enable_signaling,
+ .signaled = i915_fence_signaled,
+ .wait = i915_fence_wait,
+ .release = i915_fence_release,
+ .fence_value_str = i915_fence_value_str,
+ .timeline_value_str = i915_fence_timeline_value_str,
+};
+
static int
i915_gem_check_wedge(unsigned reset_counter, bool interruptible)
{
@@ -116,6 +202,7 @@ int i915_gem_request_alloc(struct intel_engine_cs *ring,
struct drm_i915_private *dev_priv = to_i915(ring->dev);
unsigned reset_counter = i915_reset_counter(&dev_priv->gpu_error);
struct drm_i915_gem_request *req;
+ u32 seqno;
int ret;

if (!req_out)
@@ -135,11 +222,17 @@ int i915_gem_request_alloc(struct intel_engine_cs *ring,
if (req == NULL)
return -ENOMEM;

- ret = i915_gem_get_seqno(dev_priv, &req->seqno);
+ ret = i915_gem_get_seqno(dev_priv, &seqno);
if (ret)
goto err;

- kref_init(&req->ref);
+ spin_lock_init(&req->lock);
+ fence_init(&req->fence,
+ &i915_fence_ops,
+ &req->lock,
+ ring->fence_context,
+ seqno);
+
req->i915 = dev_priv;
req->ring = ring;
req->reset_counter = reset_counter;
@@ -377,7 +470,7 @@ void __i915_add_request(struct drm_i915_gem_request *request,

request->emitted_jiffies = jiffies;
request->previous_seqno = ring->last_submitted_seqno;
- ring->last_submitted_seqno = request->seqno;
+ ring->last_submitted_seqno = request->fence.seqno;
list_add_tail(&request->list, &ring->request_list);

trace_i915_gem_request_add(request);
@@ -531,7 +624,7 @@ int __i915_wait_request(struct drm_i915_gem_request *req,
if (INTEL_INFO(req->i915)->gen >= 6)
gen6_rps_boost(req->i915, rps, req->emitted_jiffies);

- intel_wait_init(&wait, req->seqno);
+ intel_wait_init(&wait, req->fence.seqno);
set_task_state(wait.task, state);

/* Optimistic spin for the next ~jiffie before touching IRQs */
@@ -598,7 +691,8 @@ complete:
*timeout = 0;
}

- if (ret == 0 && rps && req->seqno == req->ring->last_submitted_seqno) {
+ if (ret == 0 && rps &&
+ req->fence.seqno == req->ring->last_submitted_seqno) {
/* The GPU is now idle and this client has stalled.
* Since no other client has submitted a request in the
* meantime, assume that this client is the only one
@@ -644,10 +738,3 @@ i915_wait_request(struct drm_i915_gem_request *req)
i915_gem_request_retire_upto(req);
return 0;
}
-
-void i915_gem_request_free(struct kref *req_ref)
-{
- struct drm_i915_gem_request *req =
- container_of(req_ref, typeof(*req), ref);
- kmem_cache_free(req->i915->requests, req);
-}
diff --git a/drivers/gpu/drm/i915/i915_gem_request.h b/drivers/gpu/drm/i915/i915_gem_request.h
index af1b825fce50..b55d0b7c7f2a 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.h
+++ b/drivers/gpu/drm/i915/i915_gem_request.h
@@ -25,6 +25,8 @@
#ifndef I915_GEM_REQUEST_H
#define I915_GEM_REQUEST_H

+#include <linux/fence.h>
+
/**
* Request queue structure.
*
@@ -36,11 +38,11 @@
* emission time to be associated with the request for tracking how far ahead
* of the GPU the submission is.
*
- * The requests are reference counted, so upon creation they should have an
- * initial reference taken using kref_init
+ * The requests are reference counted.
*/
struct drm_i915_gem_request {
- struct kref ref;
+ struct fence fence;
+ spinlock_t lock;

/** On Which ring this request was generated */
struct drm_i915_private *i915;
@@ -53,12 +55,6 @@ struct drm_i915_gem_request {
*/
u32 previous_seqno;

- /** GEM sequence number associated with this request,
- * when the HWS breadcrumb is equal or greater than this the GPU
- * has finished processing this request.
- */
- u32 seqno;
-
/** Position in the ringbuffer of the start of the request */
u32 head;

@@ -126,7 +122,6 @@ int i915_gem_request_alloc(struct intel_engine_cs *ring,
struct intel_context *ctx,
struct drm_i915_gem_request **req_out);
void i915_gem_request_cancel(struct drm_i915_gem_request *req);
-void i915_gem_request_free(struct kref *req_ref);
int i915_gem_request_add_to_client(struct drm_i915_gem_request *req,
struct drm_file *file);
void i915_gem_request_retire_upto(struct drm_i915_gem_request *req);
@@ -134,7 +129,7 @@ void i915_gem_request_retire_upto(struct drm_i915_gem_request *req);
static inline uint32_t
i915_gem_request_get_seqno(struct drm_i915_gem_request *req)
{
- return req ? req->seqno : 0;
+ return req ? req->fence.seqno : 0;
}

static inline struct intel_engine_cs *
@@ -144,17 +139,23 @@ i915_gem_request_get_ring(struct drm_i915_gem_request *req)
}

static inline struct drm_i915_gem_request *
+to_request(struct fence *fence)
+{
+ /* We assume that NULL fence/request are interoperable */
+ BUILD_BUG_ON(offsetof(struct drm_i915_gem_request, fence) != 0);
+ return container_of(fence, struct drm_i915_gem_request, fence);
+}
+
+static inline struct drm_i915_gem_request *
i915_gem_request_reference(struct drm_i915_gem_request *req)
{
- if (req)
- kref_get(&req->ref);
- return req;
+ return to_request(fence_get(&req->fence));
}

static inline void
i915_gem_request_unreference(struct drm_i915_gem_request *req)
{
- kref_put(&req->ref, i915_gem_request_free);
+ fence_put(&req->fence);
}

static inline void i915_gem_request_assign(struct drm_i915_gem_request **pdst,
@@ -203,7 +204,7 @@ static inline bool i915_gem_request_started(struct drm_i915_gem_request *req)
static inline bool i915_gem_request_completed(struct drm_i915_gem_request *req)
{
return i915_seqno_passed(intel_ring_get_seqno(req->ring),
- req->seqno);
+ req->fence.seqno);
}

#endif /* I915_GEM_REQUEST_H */
diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index 86f582115313..05f054898a95 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -1092,7 +1092,7 @@ static void i915_gem_record_rings(struct drm_device *dev,
}

erq = &error->ring[i].requests[count++];
- erq->seqno = request->seqno;
+ erq->seqno = request->fence.seqno;
erq->jiffies = request->emitted_jiffies;
erq->tail = request->postfix;
}
diff --git a/drivers/gpu/drm/i915/i915_guc_submission.c b/drivers/gpu/drm/i915/i915_guc_submission.c
index 9c244247c13e..56d3064d32ed 100644
--- a/drivers/gpu/drm/i915/i915_guc_submission.c
+++ b/drivers/gpu/drm/i915/i915_guc_submission.c
@@ -616,7 +616,7 @@ int i915_guc_submit(struct i915_guc_client *client,
client->retcode = 0;
}
guc->submissions[ring_id] += 1;
- guc->last_seqno[ring_id] = rq->seqno;
+ guc->last_seqno[ring_id] = rq->fence.seqno;

return q_ret;
}
diff --git a/drivers/gpu/drm/i915/i915_trace.h b/drivers/gpu/drm/i915/i915_trace.h
index 43bb2e0bb949..dc2ff5cac2f4 100644
--- a/drivers/gpu/drm/i915/i915_trace.h
+++ b/drivers/gpu/drm/i915/i915_trace.h
@@ -503,7 +503,7 @@ TRACE_EVENT(i915_gem_ring_dispatch,
__entry->ring = ring->id;
__entry->seqno = i915_gem_request_get_seqno(req);
__entry->flags = flags;
- intel_engine_enable_signaling(req);
+ fence_enable_sw_signaling(&req->fence);
),

TP_printk("dev=%u, ring=%u, seqno=%u, flags=%x",
diff --git a/drivers/gpu/drm/i915/intel_breadcrumbs.c b/drivers/gpu/drm/i915/intel_breadcrumbs.c
index f6731aac7fcf..61e18cb90850 100644
--- a/drivers/gpu/drm/i915/intel_breadcrumbs.c
+++ b/drivers/gpu/drm/i915/intel_breadcrumbs.c
@@ -390,6 +390,7 @@ static int intel_breadcrumbs_signaler(void *arg)
*/
intel_engine_remove_wait(engine, &signal->wait);

+ fence_signal(&signal->request->fence);
i915_gem_request_unreference(signal->request);

/* Find the next oldest signal. Note that as we have
@@ -456,7 +457,7 @@ int intel_engine_enable_signaling(struct drm_i915_gem_request *request)
}

signal->wait.task = task;
- signal->wait.seqno = request->seqno;
+ signal->wait.seqno = request->fence.seqno;

signal->request = i915_gem_request_reference(request);

diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 7a3069a2beb2..f43a94ae5c76 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1798,7 +1798,7 @@ static int gen8_emit_request(struct drm_i915_gem_request *request)
(ring->status_page.gfx_addr +
(I915_GEM_HWS_INDEX << MI_STORE_DWORD_INDEX_SHIFT)));
intel_logical_ring_emit(ringbuf, 0);
- intel_logical_ring_emit(ringbuf, request->seqno);
+ intel_logical_ring_emit(ringbuf, request->fence.seqno);
intel_logical_ring_emit(ringbuf, MI_USER_INTERRUPT);
intel_logical_ring_emit(ringbuf, MI_NOOP);
intel_logical_ring_advance_and_submit(request);
@@ -1909,6 +1909,7 @@ static int logical_ring_init(struct drm_device *dev, struct intel_engine_cs *rin

ring->dev = dev;
ring->i915 = to_i915(dev);
+ ring->fence_context = fence_context_alloc(1);
INIT_LIST_HEAD(&ring->active_list);
INIT_LIST_HEAD(&ring->request_list);
i915_gem_batch_pool_init(dev, &ring->batch_pool);
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index d9bb6458fa60..e8a7a1045c06 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -1218,7 +1218,7 @@ static int gen8_rcs_signal(struct drm_i915_gem_request *signaller_req,
PIPE_CONTROL_FLUSH_ENABLE);
intel_ring_emit(signaller, lower_32_bits(gtt_offset));
intel_ring_emit(signaller, upper_32_bits(gtt_offset));
- intel_ring_emit(signaller, signaller_req->seqno);
+ intel_ring_emit(signaller, signaller_req->fence.seqno);
intel_ring_emit(signaller, 0);
intel_ring_emit(signaller, MI_SEMAPHORE_SIGNAL |
MI_SEMAPHORE_TARGET(waiter->id));
@@ -1256,7 +1256,7 @@ static int gen8_xcs_signal(struct drm_i915_gem_request *signaller_req,
intel_ring_emit(signaller, lower_32_bits(gtt_offset) |
MI_FLUSH_DW_USE_GTT);
intel_ring_emit(signaller, upper_32_bits(gtt_offset));
- intel_ring_emit(signaller, signaller_req->seqno);
+ intel_ring_emit(signaller, signaller_req->fence.seqno);
intel_ring_emit(signaller, MI_SEMAPHORE_SIGNAL |
MI_SEMAPHORE_TARGET(waiter->id));
intel_ring_emit(signaller, 0);
@@ -1289,7 +1289,7 @@ static int gen6_signal(struct drm_i915_gem_request *signaller_req,
if (i915_mmio_reg_valid(mbox_reg)) {
intel_ring_emit(signaller, MI_LOAD_REGISTER_IMM(1));
intel_ring_emit_reg(signaller, mbox_reg);
- intel_ring_emit(signaller, signaller_req->seqno);
+ intel_ring_emit(signaller, signaller_req->fence.seqno);
}
}

@@ -1324,7 +1324,7 @@ gen6_add_request(struct drm_i915_gem_request *req)

intel_ring_emit(ring, MI_STORE_DWORD_INDEX);
intel_ring_emit(ring, I915_GEM_HWS_INDEX << MI_STORE_DWORD_INDEX_SHIFT);
- intel_ring_emit(ring, req->seqno);
+ intel_ring_emit(ring, req->fence.seqno);
intel_ring_emit(ring, MI_USER_INTERRUPT);
__intel_ring_advance(ring);

@@ -1448,7 +1448,7 @@ pc_render_add_request(struct drm_i915_gem_request *req)
PIPE_CONTROL_QW_WRITE |
PIPE_CONTROL_WRITE_FLUSH);
intel_ring_emit(ring, addr | PIPE_CONTROL_GLOBAL_GTT);
- intel_ring_emit(ring, req->seqno);
+ intel_ring_emit(ring, req->fence.seqno);
intel_ring_emit(ring, 0);
PIPE_CONTROL_FLUSH(ring, scratch_addr);
scratch_addr += 2 * CACHELINE_BYTES; /* write to separate cachelines */
@@ -1467,7 +1467,7 @@ pc_render_add_request(struct drm_i915_gem_request *req)
PIPE_CONTROL_WRITE_FLUSH |
PIPE_CONTROL_NOTIFY);
intel_ring_emit(ring, addr | PIPE_CONTROL_GLOBAL_GTT);
- intel_ring_emit(ring, req->seqno);
+ intel_ring_emit(ring, req->fence.seqno);
intel_ring_emit(ring, 0);
__intel_ring_advance(ring);

@@ -1577,7 +1577,7 @@ i9xx_add_request(struct drm_i915_gem_request *req)

intel_ring_emit(ring, MI_STORE_DWORD_INDEX);
intel_ring_emit(ring, I915_GEM_HWS_INDEX << MI_STORE_DWORD_INDEX_SHIFT);
- intel_ring_emit(ring, req->seqno);
+ intel_ring_emit(ring, req->fence.seqno);
intel_ring_emit(ring, MI_USER_INTERRUPT);
__intel_ring_advance(ring);

@@ -2010,6 +2010,7 @@ static int intel_init_ring_buffer(struct drm_device *dev,

ring->dev = dev;
ring->i915 = to_i915(dev);
+ ring->fence_context = fence_context_alloc(1);
INIT_LIST_HEAD(&ring->active_list);
INIT_LIST_HEAD(&ring->request_list);
INIT_LIST_HEAD(&ring->execlist_queue);
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index eecf9c7ae2b8..a1fcb6c7501f 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -159,6 +159,7 @@ struct intel_engine_cs {
} id;
#define I915_NUM_RINGS 5
#define LAST_USER_RING (VECS + 1)
+ unsigned fence_context;
u32 mmio_base;
struct drm_device *dev;
struct drm_i915_private *i915;
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:17:26 UTC
Permalink
With the introduction of requests, we amplified the number of atomic
refcounted objects we use and update on every execbuffer; from none to
several references, and a set of references that need to be changed. We
also introduced interesting side-effects in the order of retiring
requests and objects.

Instead of independently tracking the last request for an object, track
the active objects for each request. The object will reside in the
buffer list of its most recent active request and so we reduce the kref
interchange to a list_move. Now retirements are entirely driven by the
request, dramatically simplifying activity tracking on the objects
themselves, and removing the ambiguity between retiring objects and
retiring requests.

All told, less code, simpler and faster, and more extensible.
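
The hunks below use init_request_active() and i915_gem_request_mark_active()
without showing their definitions; a plausible sketch consistent with their
usage here (names and layout inferred from this patch, so treat it as an
assumption rather than the final interface) is:

struct i915_gem_active {
	struct drm_i915_gem_request *request;
	struct list_head link; /* on request->active_list */
	void (*retire)(struct i915_gem_active *,
		       struct drm_i915_gem_request *);
};

static inline void
init_request_active(struct i915_gem_active *active,
		    void (*retire)(struct i915_gem_active *,
				   struct drm_i915_gem_request *))
{
	INIT_LIST_HEAD(&active->link);
	active->request = NULL;
	active->retire = retire;
}

/* Tracking an object is then just a list_move onto the new request's
 * list; on retirement the request walks that list and invokes each
 * tracker's retire callback.
 */
static inline void
i915_gem_request_mark_active(struct drm_i915_gem_request *request,
			     struct i915_gem_active *active)
{
	list_move_tail(&active->link, &request->active_list);
	active->request = request;
}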

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/Makefile | 1 -
drivers/gpu/drm/i915/i915_drv.h | 10 --
drivers/gpu/drm/i915/i915_gem.c | 160 ++++++++------------------------
drivers/gpu/drm/i915/i915_gem_debug.c | 70 --------------
drivers/gpu/drm/i915/i915_gem_fence.c | 10 +-
drivers/gpu/drm/i915/i915_gem_request.c | 44 +++++++--
drivers/gpu/drm/i915/i915_gem_request.h | 16 +++-
drivers/gpu/drm/i915/intel_lrc.c | 1 -
drivers/gpu/drm/i915/intel_ringbuffer.c | 1 -
drivers/gpu/drm/i915/intel_ringbuffer.h | 12 ---
10 files changed, 89 insertions(+), 236 deletions(-)
delete mode 100644 drivers/gpu/drm/i915/i915_gem_debug.c

diff --git a/drivers/gpu/drm/i915/Makefile b/drivers/gpu/drm/i915/Makefile
index b0a83215db80..79d657f29241 100644
--- a/drivers/gpu/drm/i915/Makefile
+++ b/drivers/gpu/drm/i915/Makefile
@@ -23,7 +23,6 @@ i915-$(CONFIG_DEBUG_FS) += i915_debugfs.o
i915-y += i915_cmd_parser.o \
i915_gem_batch_pool.o \
i915_gem_context.o \
- i915_gem_debug.o \
i915_gem_dmabuf.o \
i915_gem_evict.o \
i915_gem_execbuffer.o \
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index c577f86d94f8..c9c1a5cdc1e5 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -435,8 +435,6 @@ void intel_link_compute_m_n(int bpp, int nlanes,
#define DRIVER_MINOR 6
#define DRIVER_PATCHLEVEL 0

-#define WATCH_LISTS 0
-
struct opregion_header;
struct opregion_acpi;
struct opregion_swsci;
@@ -2024,7 +2022,6 @@ struct drm_i915_gem_object {
struct drm_mm_node *stolen;
struct list_head global_list;

- struct list_head ring_list[I915_NUM_RINGS];
/** Used in execbuf to temporarily hold a ref */
struct list_head obj_exec_link;

@@ -3068,13 +3065,6 @@ static inline bool i915_gem_object_needs_bit17_swizzle(struct drm_i915_gem_objec
obj->tiling_mode != I915_TILING_NONE;
}

-/* i915_gem_debug.c */
-#if WATCH_LISTS
-int i915_verify_lists(struct drm_device *dev);
-#else
-#define i915_verify_lists(dev) 0
-#endif
-
/* i915_debugfs.c */
int i915_debugfs_init(struct drm_minor *minor);
void i915_debugfs_cleanup(struct drm_minor *minor);
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index f314b3ea2726..4eef13ebdaf3 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -40,10 +40,6 @@

static void i915_gem_object_flush_gtt_write_domain(struct drm_i915_gem_object *obj);
static void i915_gem_object_flush_cpu_write_domain(struct drm_i915_gem_object *obj);
-static void
-i915_gem_object_retire__write(struct drm_i915_gem_object *obj);
-static void
-i915_gem_object_retire__read(struct drm_i915_gem_object *obj, int ring);

static bool cpu_cache_is_coherent(struct drm_device *dev,
enum i915_cache_level level)
@@ -117,7 +113,6 @@ int i915_mutex_lock_interruptible(struct drm_device *dev)
if (ret)
return ret;

- WARN_ON(i915_verify_lists(dev));
return 0;
}

@@ -1117,27 +1112,14 @@ i915_gem_object_wait_rendering(struct drm_i915_gem_object *obj,
return 0;

if (readonly) {
- if (obj->last_write.request != NULL) {
- ret = i915_wait_request(obj->last_write.request);
- if (ret)
- return ret;
-
- i = obj->last_write.request->engine->id;
- if (obj->last_read[i].request == obj->last_write.request)
- i915_gem_object_retire__read(obj, i);
- else
- i915_gem_object_retire__write(obj);
- }
+ ret = i915_wait_request(obj->last_write.request);
+ if (ret)
+ return ret;
} else {
for (i = 0; i < I915_NUM_RINGS; i++) {
- if (obj->last_read[i].request == NULL)
- continue;
-
ret = i915_wait_request(obj->last_read[i].request);
if (ret)
return ret;
-
- i915_gem_object_retire__read(obj, i);
}
GEM_BUG_ON(obj->active);
}
@@ -1145,20 +1127,6 @@ i915_gem_object_wait_rendering(struct drm_i915_gem_object *obj,
return 0;
}

-static void
-i915_gem_object_retire_request(struct drm_i915_gem_object *obj,
- struct drm_i915_gem_request *req)
-{
- int ring = req->engine->id;
-
- if (obj->last_read[ring].request == req)
- i915_gem_object_retire__read(obj, ring);
- else if (obj->last_write.request == req)
- i915_gem_object_retire__write(obj);
-
- i915_gem_request_retire_upto(req);
-}
-
/* A nonblocking variant of the above wait. This is a highly dangerous routine
* as the object state may change during this call.
*/
@@ -1206,7 +1174,7 @@ i915_gem_object_wait_rendering__nonblocking(struct drm_i915_gem_object *obj,

for (i = 0; i < n; i++) {
if (ret == 0)
- i915_gem_object_retire_request(obj, requests[i]);
+ i915_gem_request_retire_upto(requests[i]);
i915_gem_request_put(requests[i]);
}

@@ -2069,35 +2037,37 @@ void i915_vma_move_to_active(struct i915_vma *vma,
drm_gem_object_reference(&obj->base);
obj->active |= intel_engine_flag(engine);

- list_move_tail(&obj->ring_list[engine->id], &engine->active_list);
i915_gem_request_mark_active(req, &obj->last_read[engine->id]);
-
list_move_tail(&vma->mm_list, &vma->vm->active_list);
}

static void
-i915_gem_object_retire__write(struct drm_i915_gem_object *obj)
+i915_gem_object_retire__fence(struct i915_gem_active *active,
+ struct drm_i915_gem_request *req)
{
- GEM_BUG_ON(obj->last_write.request == NULL);
- GEM_BUG_ON(!(obj->active & intel_engine_flag(obj->last_write.request->engine)));
+}

- i915_gem_request_assign(&obj->last_write.request, NULL);
- intel_fb_obj_flush(obj, true, ORIGIN_CS);
+static void
+i915_gem_object_retire__write(struct i915_gem_active *active,
+ struct drm_i915_gem_request *request)
+{
+ intel_fb_obj_flush(container_of(active,
+ struct drm_i915_gem_object,
+ last_write),
+ true,
+ ORIGIN_CS);
}

static void
-i915_gem_object_retire__read(struct drm_i915_gem_object *obj, int ring)
+i915_gem_object_retire__read(struct i915_gem_active *active,
+ struct drm_i915_gem_request *request)
{
+ int ring = request->engine->id;
+ struct drm_i915_gem_object *obj =
+ container_of(active, struct drm_i915_gem_object, last_read[ring]);
struct i915_vma *vma;

- GEM_BUG_ON(obj->last_read[ring].request == NULL);
- GEM_BUG_ON(!(obj->active & (1 << ring)));
-
- list_del_init(&obj->ring_list[ring]);
- i915_gem_request_assign(&obj->last_read[ring].request, NULL);
-
- if (obj->last_write.request && obj->last_write.request->engine->id == ring)
- i915_gem_object_retire__write(obj);
+ GEM_BUG_ON((obj->active & (1 << ring)) == 0);

obj->active &= ~(1 << ring);
if (obj->active)
@@ -2107,15 +2077,13 @@ i915_gem_object_retire__read(struct drm_i915_gem_object *obj, int ring)
* so that we don't steal from recently used but inactive objects
* (unless we are forced to ofc!)
*/
- list_move_tail(&obj->global_list,
- &to_i915(obj->base.dev)->mm.bound_list);
+ list_move_tail(&obj->global_list, &request->i915->mm.bound_list);

list_for_each_entry(vma, &obj->vma_list, vma_link) {
if (!list_empty(&vma->mm_list))
list_move_tail(&vma->mm_list, &vma->vm->inactive_list);
}

- i915_gem_request_assign(&obj->last_fence.request, NULL);
drm_gem_object_unreference(&obj->base);
}

@@ -2216,16 +2184,6 @@ static void i915_gem_reset_ring_cleanup(struct intel_engine_cs *engine)
{
struct intel_ring *ring;

- while (!list_empty(&engine->active_list)) {
- struct drm_i915_gem_object *obj;
-
- obj = list_first_entry(&engine->active_list,
- struct drm_i915_gem_object,
- ring_list[engine->id]);
-
- i915_gem_object_retire__read(obj, engine->id);
- }
-
/*
* Clear the execlists queue up before freeing the requests, as those
* are the ones that keep the context and ringbuffer backing objects
@@ -2295,8 +2253,6 @@ void i915_gem_reset(struct drm_device *dev)
i915_gem_context_reset(dev);

i915_gem_restore_fences(dev);
-
- WARN_ON(i915_verify_lists(dev));
}

/**
@@ -2305,13 +2261,6 @@ void i915_gem_reset(struct drm_device *dev)
void
i915_gem_retire_requests_ring(struct intel_engine_cs *ring)
{
- WARN_ON(i915_verify_lists(ring->dev));
-
- /* Retire requests first as we use it above for the early return.
- * If we retire requests last, we may use a later seqno and so clear
- * the requests lists without clearing the active list, leading to
- * confusion.
- */
while (!list_empty(&ring->request_list)) {
struct drm_i915_gem_request *request;

@@ -2324,25 +2273,6 @@ i915_gem_retire_requests_ring(struct intel_engine_cs *ring)

i915_gem_request_retire_upto(request);
}
-
- /* Move any buffers on the active list that are no longer referenced
- * by the ringbuffer to the flushing/inactive lists as appropriate,
- * before we free the context associated with the requests.
- */
- while (!list_empty(&ring->active_list)) {
- struct drm_i915_gem_object *obj;
-
- obj = list_first_entry(&ring->active_list,
- struct drm_i915_gem_object,
- ring_list[ring->id]);
-
- if (!list_empty(&obj->last_read[ring->id].request->link))
- break;
-
- i915_gem_object_retire__read(obj, ring->id);
- }
-
- WARN_ON(i915_verify_lists(ring->dev));
}

void
@@ -2434,13 +2364,13 @@ out:
* write domains, emitting any outstanding lazy request and retiring and
* completed requests.
*/
-static int
+static void
i915_gem_object_flush_active(struct drm_i915_gem_object *obj)
{
int i;

if (!obj->active)
- return 0;
+ return;

for (i = 0; i < I915_NUM_RINGS; i++) {
struct drm_i915_gem_request *req;
@@ -2449,17 +2379,9 @@ i915_gem_object_flush_active(struct drm_i915_gem_object *obj)
if (req == NULL)
continue;

- if (list_empty(&req->link))
- goto retire;
-
- if (i915_gem_request_completed(req)) {
+ if (i915_gem_request_completed(req))
i915_gem_request_retire_upto(req);
-retire:
- i915_gem_object_retire__read(obj, i);
- }
}
-
- return 0;
}

/**
@@ -2507,10 +2429,7 @@ i915_gem_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
}

/* Need to make sure the object gets inactive eventually. */
- ret = i915_gem_object_flush_active(obj);
- if (ret)
- goto out;
-
+ i915_gem_object_flush_active(obj);
if (!obj->active)
goto out;

@@ -2522,8 +2441,6 @@ i915_gem_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
goto out;
}

- drm_gem_object_unreference(&obj->base);
-
for (i = 0; i < I915_NUM_RINGS; i++) {
if (obj->last_read[i].request == NULL)
continue;
@@ -2531,6 +2448,8 @@ i915_gem_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
req[n++] = i915_gem_request_get(obj->last_read[i].request);
}

+out:
+ drm_gem_object_unreference(&obj->base);
mutex_unlock(&dev->struct_mutex);

for (i = 0; i < n; i++) {
@@ -2541,11 +2460,6 @@ i915_gem_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
i915_gem_request_put(req[i]);
}
return ret;
-
-out:
- drm_gem_object_unreference(&obj->base);
- mutex_unlock(&dev->struct_mutex);
- return ret;
}

static int
@@ -2569,7 +2483,7 @@ __i915_gem_object_sync(struct drm_i915_gem_object *obj,
if (ret)
return ret;

- i915_gem_object_retire_request(obj, from);
+ i915_gem_request_retire_upto(from);
} else {
int idx = intel_engine_sync_index(from->engine, to->engine);
if (from->fence.seqno <= from->engine->semaphore.sync_seqno[idx])
@@ -2760,7 +2674,6 @@ int i915_gpu_idle(struct drm_device *dev)
return ret;
}

- WARN_ON(i915_verify_lists(dev));
return 0;
}

@@ -3689,16 +3602,13 @@ i915_gem_busy_ioctl(struct drm_device *dev, void *data,
* become non-busy without any further actions, therefore emit any
* necessary flushes here.
*/
- ret = i915_gem_object_flush_active(obj);
- if (ret)
- goto unref;
+ i915_gem_object_flush_active(obj);

BUILD_BUG_ON(I915_NUM_RINGS > 16);
args->busy = obj->active << 16;
if (obj->last_write.request)
args->busy |= obj->last_write.request->engine->id;

-unref:
drm_gem_object_unreference(&obj->base);
unlock:
mutex_unlock(&dev->struct_mutex);
@@ -3776,7 +3686,12 @@ void i915_gem_object_init(struct drm_i915_gem_object *obj,

INIT_LIST_HEAD(&obj->global_list);
for (i = 0; i < I915_NUM_RINGS; i++)
- INIT_LIST_HEAD(&obj->ring_list[i]);
+ init_request_active(&obj->last_read[i],
+ i915_gem_object_retire__read);
+ init_request_active(&obj->last_write,
+ i915_gem_object_retire__write);
+ init_request_active(&obj->last_fence,
+ i915_gem_object_retire__fence);
INIT_LIST_HEAD(&obj->obj_exec_link);
INIT_LIST_HEAD(&obj->vma_list);
INIT_LIST_HEAD(&obj->batch_pool_link);
@@ -4372,7 +4287,6 @@ i915_gem_cleanup_ringbuffer(struct drm_device *dev)
static void
init_ring_lists(struct intel_engine_cs *ring)
{
- INIT_LIST_HEAD(&ring->active_list);
INIT_LIST_HEAD(&ring->request_list);
}

diff --git a/drivers/gpu/drm/i915/i915_gem_debug.c b/drivers/gpu/drm/i915/i915_gem_debug.c
deleted file mode 100644
index 17299d04189f..000000000000
--- a/drivers/gpu/drm/i915/i915_gem_debug.c
+++ /dev/null
@@ -1,70 +0,0 @@
-/*
- * Copyright © 2008 Intel Corporation
- *
- * Permission is hereby granted, free of charge, to any person obtaining a
- * copy of this software and associated documentation files (the "Software"),
- * to deal in the Software without restriction, including without limitation
- * the rights to use, copy, modify, merge, publish, distribute, sublicense,
- * and/or sell copies of the Software, and to permit persons to whom the
- * Software is furnished to do so, subject to the following conditions:
- *
- * The above copyright notice and this permission notice (including the next
- * paragraph) shall be included in all copies or substantial portions of the
- * Software.
- *
- * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
- * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
- * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
- * IN THE SOFTWARE.
- *
- * Authors:
- * Keith Packard <***@keithp.com>
- *
- */
-
-#include <drm/drmP.h>
-#include <drm/i915_drm.h>
-#include "i915_drv.h"
-
-#if WATCH_LISTS
-int
-i915_verify_lists(struct drm_device *dev)
-{
- static int warned;
- struct drm_i915_private *dev_priv = to_i915(dev);
- struct drm_i915_gem_object *obj;
- struct intel_engine_cs *ring;
- int err = 0;
- int i;
-
- if (warned)
- return 0;
-
- for_each_ring(ring, dev_priv, i) {
- list_for_each_entry(obj, &ring->active_list, ring_list[ring->id]) {
- if (obj->base.dev != dev ||
- !atomic_read(&obj->base.refcount.refcount)) {
- DRM_ERROR("%s: freed active obj %p\n",
- ring->name, obj);
- err++;
- break;
- } else if (!obj->active ||
- obj->last_read_req[ring->id] == NULL) {
- DRM_ERROR("%s: invalid active obj %p\n",
- ring->name, obj);
- err++;
- } else if (obj->base.write_domain) {
- DRM_ERROR("%s: invalid write obj %p (w %x)\n",
- ring->name,
- obj, obj->base.write_domain);
- err++;
- }
- }
- }
-
- return warned = err;
-}
-#endif /* WATCH_LIST */
diff --git a/drivers/gpu/drm/i915/i915_gem_fence.c b/drivers/gpu/drm/i915/i915_gem_fence.c
index ab29c237ffa9..ff085efcf0e5 100644
--- a/drivers/gpu/drm/i915/i915_gem_fence.c
+++ b/drivers/gpu/drm/i915/i915_gem_fence.c
@@ -261,15 +261,7 @@ static inline void i915_gem_object_fence_lost(struct drm_i915_gem_object *obj)
static int
i915_gem_object_wait_fence(struct drm_i915_gem_object *obj)
{
- if (obj->last_fence.request) {
- int ret = i915_wait_request(obj->last_fence.request);
- if (ret)
- return ret;
-
- i915_gem_request_assign(&obj->last_fence.request, NULL);
- }
-
- return 0;
+ return i915_wait_request(obj->last_fence.request);
}

/**
diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
index 7f38d8972721..069c0b9dfd95 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/i915_gem_request.c
@@ -228,6 +228,7 @@ i915_gem_request_alloc(struct intel_engine_cs *engine,
engine->fence_context,
seqno);

+ INIT_LIST_HEAD(&req->active_list);
req->i915 = dev_priv;
req->engine = engine;
req->reset_counter = reset_counter;
@@ -320,6 +321,27 @@ static void __i915_gem_request_release(struct drm_i915_gem_request *request)
i915_gem_request_put(request);
}

+static void __i915_gem_request_retire_active(struct drm_i915_gem_request *req)
+{
+ struct i915_gem_active *active, *next;
+
+ /* Walk through the active list, calling retire on each. This allows
+ * objects to track their GPU activity and mark themselves as idle
+ * when their *last* active request is completed (updating state
+ * tracking lists for eviction, active references for GEM, etc).
+ *
+ * As the ->retire() may free the node, we decouple it first and
+ * pass along the auxiliary information (to avoid dereferencing
+ * the node after the callback).
+ */
+ list_for_each_entry_safe(active, next, &req->active_list, link) {
+ INIT_LIST_HEAD(&active->link);
+ active->request = NULL;
+
+ active->retire(active, req);
+ }
+}
+
void i915_gem_request_cancel(struct drm_i915_gem_request *req)
{
intel_ring_reserved_space_cancel(req->ring);
@@ -327,6 +349,14 @@ void i915_gem_request_cancel(struct drm_i915_gem_request *req)
if (req->ctx != req->engine->default_context)
intel_lr_context_unpin(req);
}
+
+ /* If a request is to be discarded after actions have been queued upon
+ * it, we cannot unwind that request and it must be submitted rather
+ * than cancelled. This is not limited to activity tracking, but all
+ * other state tracking (such as current register settings etc).
+ */
+ GEM_BUG_ON(!list_empty(&req->active_list));
+
__i915_gem_request_release(req);
}

@@ -344,6 +374,8 @@ static void i915_gem_request_retire(struct drm_i915_gem_request *request)
* completion order.
*/
request->ring->last_retired_head = request->postfix;
+
+ __i915_gem_request_retire_active(request);
__i915_gem_request_release(request);
}

@@ -354,7 +386,6 @@ i915_gem_request_retire_upto(struct drm_i915_gem_request *req)
struct drm_i915_gem_request *tmp;

lockdep_assert_held(&engine->dev->struct_mutex);
-
if (list_empty(&req->link))
return;

@@ -364,8 +395,6 @@ i915_gem_request_retire_upto(struct drm_i915_gem_request *req)

i915_gem_request_retire(tmp);
} while (tmp != req);
-
- WARN_ON(i915_verify_lists(engine->dev));
}

static void i915_gem_mark_busy(struct drm_i915_private *dev_priv)
@@ -565,9 +594,6 @@ int __i915_wait_request(struct drm_i915_gem_request *req,

might_sleep();

- if (list_empty(&req->link))
- return 0;
-
if (i915_gem_request_completed(req))
return 0;

@@ -700,10 +726,12 @@ i915_wait_request(struct drm_i915_gem_request *req)
{
int ret;

- BUG_ON(req == NULL);
+ if (req == NULL)
+ return 0;

- BUG_ON(!mutex_is_locked(&req->i915->dev->struct_mutex));
+ GEM_BUG_ON(list_empty(&req->link));

+ lockdep_assert_held(&req->i915->dev->struct_mutex);
ret = __i915_wait_request(req, req->i915->mm.interruptible, NULL, NULL);
if (ret)
return ret;
diff --git a/drivers/gpu/drm/i915/i915_gem_request.h b/drivers/gpu/drm/i915/i915_gem_request.h
index 01d589be95fd..59957d5edfdb 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.h
+++ b/drivers/gpu/drm/i915/i915_gem_request.h
@@ -84,6 +84,7 @@ struct drm_i915_gem_request {
/** Batch buffer related to this request if any (used for
error state dump only) */
struct drm_i915_gem_object *batch_obj;
+ struct list_head active_list;

/** Time at which this request was emitted, in jiffies. */
unsigned long emitted_jiffies;
@@ -237,13 +238,26 @@ static inline bool i915_gem_request_completed(struct drm_i915_gem_request *req)
*/
struct i915_gem_active {
struct drm_i915_gem_request *request;
+ struct list_head link;
+ void (*retire)(struct i915_gem_active *,
+ struct drm_i915_gem_request *);
};

static inline void
+init_request_active(struct i915_gem_active *active,
+ void (*func)(struct i915_gem_active *,
+ struct drm_i915_gem_request *))
+{
+ INIT_LIST_HEAD(&active->link);
+ active->retire = func;
+}
+
+static inline void
i915_gem_request_mark_active(struct drm_i915_gem_request *request,
struct i915_gem_active *active)
{
- i915_gem_request_assign(&active->request, request);
+ list_move(&active->link, &request->active_list);
+ active->request = request;
}

#endif /* I915_GEM_REQUEST_H */
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 0f0bf97e4032..b5f62b5f4913 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1558,7 +1558,6 @@ static int logical_ring_init(struct drm_device *dev, struct intel_engine_cs *rin
ring->dev = dev;
ring->i915 = to_i915(dev);
ring->fence_context = fence_context_alloc(1);
- INIT_LIST_HEAD(&ring->active_list);
INIT_LIST_HEAD(&ring->request_list);
i915_gem_batch_pool_init(dev, &ring->batch_pool);
intel_engine_init_breadcrumbs(ring);
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index 213540f92c9d..7ca4e1fc854d 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -2025,7 +2025,6 @@ static int intel_init_engine(struct drm_device *dev,
engine->dev = dev;
engine->i915 = to_i915(dev);
engine->fence_context = fence_context_alloc(1);
- INIT_LIST_HEAD(&engine->active_list);
INIT_LIST_HEAD(&engine->request_list);
INIT_LIST_HEAD(&engine->execlist_queue);
INIT_LIST_HEAD(&engine->buffers);
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index fc9c1e453be1..bb92d831a100 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -298,18 +298,6 @@ struct intel_engine_cs {
u32 irq_keep_mask; /* bitmask for interrupts that should not be masked */

/**
- * List of objects currently involved in rendering from the
- * ringbuffer.
- *
- * Includes buffers having the contents of their GPU caches
- * flushed, not necessarily primitives. last_read_req
- * represents when the rendering involved will be completed.
- *
- * A reference is held on the buffer while on this list.
- */
- struct list_head active_list;
-
- /**
* List of breadcrumbs associated with GPU requests currently
* outstanding.
*/
--
2.7.0.rc3
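
As a rough illustration of the active-tracking pattern introduced above, here
is a standalone userspace sketch. It is not kernel code: the list primitives
are simplified stand-ins for the kernel's list.h, and the names merely mirror
the patch. What it models is the key invariant of __i915_gem_request_retire_active(),
namely that each request keeps a list of active nodes and that retirement
decouples each node before invoking its retire() callback, since the callback
may free the node.

#include <stdio.h>

struct request;

struct active {
	struct request *request;
	struct active *next;	/* link within request->active_list */
	void (*retire)(struct active *, struct request *);
};

struct request {
	int seqno;
	struct active *active_list;	/* singly-linked for brevity */
};

static void mark_active(struct request *rq, struct active *node)
{
	node->request = rq;
	node->next = rq->active_list;
	rq->active_list = node;
}

static void retire_request(struct request *rq)
{
	struct active *node = rq->active_list;

	rq->active_list = NULL;
	while (node) {
		struct active *next = node->next;

		/* Decouple first: retire() may free the node. */
		node->next = NULL;
		node->request = NULL;
		node->retire(node, rq);
		node = next;
	}
}

static void retire_read(struct active *node, struct request *rq)
{
	printf("object idle after request %d\n", rq->seqno);
}

int main(void)
{
	struct request rq = { .seqno = 1 };
	struct active last_read = { .retire = retire_read };

	mark_active(&rq, &last_read);
	retire_request(&rq);	/* prints: object idle after request 1 */
	return 0;
}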
Chris Wilson
2016-01-11 09:17:12 UTC
Permalink
Perform s/ringbuf/ring/ on the context struct for consistency with the
ring/engine split.

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_debugfs.c | 2 +-
drivers/gpu/drm/i915/i915_drv.h | 2 +-
drivers/gpu/drm/i915/i915_guc_submission.c | 6 +--
drivers/gpu/drm/i915/intel_lrc.c | 63 ++++++++++++++----------------
4 files changed, 35 insertions(+), 38 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 018076c89247..6e91726db8d3 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -1988,7 +1988,7 @@ static int i915_context_status(struct seq_file *m, void *unused)
struct drm_i915_gem_object *ctx_obj =
ctx->engine[i].state;
struct intel_ringbuffer *ringbuf =
- ctx->engine[i].ringbuf;
+ ctx->engine[i].ring;

seq_printf(m, "%s: ", ring->name);
if (ctx_obj)
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index baede4517c70..9f06dd19bfb2 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -885,7 +885,7 @@ struct intel_context {
/* Execlists */
struct {
struct drm_i915_gem_object *state;
- struct intel_ringbuffer *ringbuf;
+ struct intel_ringbuffer *ring;
int pin_count;
} engine[I915_NUM_RINGS];

diff --git a/drivers/gpu/drm/i915/i915_guc_submission.c b/drivers/gpu/drm/i915/i915_guc_submission.c
index 53abe2143f8a..b47e630e048a 100644
--- a/drivers/gpu/drm/i915/i915_guc_submission.c
+++ b/drivers/gpu/drm/i915/i915_guc_submission.c
@@ -390,7 +390,7 @@ static void guc_init_ctx_desc(struct intel_guc *guc,

for (i = 0; i < I915_NUM_RINGS; i++) {
struct guc_execlist_context *lrc = &desc.lrc[i];
- struct intel_ringbuffer *ringbuf = ctx->engine[i].ringbuf;
+ struct intel_ringbuffer *ring = ctx->engine[i].ring;
struct intel_engine_cs *engine;
struct drm_i915_gem_object *obj;
uint64_t ctx_desc;
@@ -406,7 +406,7 @@ static void guc_init_ctx_desc(struct intel_guc *guc,
if (!obj)
break; /* XXX: continue? */

- engine = ringbuf->engine;
+ engine = ring->engine;
ctx_desc = intel_lr_context_descriptor(ctx, engine);
lrc->context_desc = (u32)ctx_desc;

@@ -416,7 +416,7 @@ static void guc_init_ctx_desc(struct intel_guc *guc,
lrc->context_id = (client->ctx_index << GUC_ELC_CTXID_OFFSET) |
(engine->id << GUC_ELC_ENGINE_OFFSET);

- obj = ringbuf->obj;
+ obj = ring->obj;

lrc->ring_begin = i915_gem_obj_ggtt_offset(obj);
lrc->ring_end = lrc->ring_begin + obj->base.size - 1;
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 8639ebfab96f..65beb7267d1a 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -402,24 +402,24 @@ static void execlists_submit_requests(struct drm_i915_gem_request *rq0,
execlists_elsp_write(rq0, rq1);
}

-static void execlists_context_unqueue(struct intel_engine_cs *ring)
+static void execlists_context_unqueue(struct intel_engine_cs *engine)
{
struct drm_i915_gem_request *req0 = NULL, *req1 = NULL;
struct drm_i915_gem_request *cursor = NULL, *tmp = NULL;

- assert_spin_locked(&ring->execlist_lock);
+ assert_spin_locked(&engine->execlist_lock);

/*
* If irqs are not active generate a warning as batches that finish
* without the irqs may get lost and a GPU Hang may occur.
*/
- WARN_ON(!intel_irqs_enabled(ring->dev->dev_private));
+ WARN_ON(!intel_irqs_enabled(engine->dev->dev_private));

- if (list_empty(&ring->execlist_queue))
+ if (list_empty(&engine->execlist_queue))
return;

/* Try to read in pairs */
- list_for_each_entry_safe(cursor, tmp, &ring->execlist_queue,
+ list_for_each_entry_safe(cursor, tmp, &engine->execlist_queue,
execlist_link) {
if (!req0) {
req0 = cursor;
@@ -429,7 +429,7 @@ static void execlists_context_unqueue(struct intel_engine_cs *ring)
cursor->elsp_submitted = req0->elsp_submitted;
list_del(&req0->execlist_link);
list_add_tail(&req0->execlist_link,
- &ring->execlist_retired_req_list);
+ &engine->execlist_retired_req_list);
req0 = cursor;
} else {
req1 = cursor;
@@ -437,7 +437,7 @@ static void execlists_context_unqueue(struct intel_engine_cs *ring)
}
}

- if (IS_GEN8(ring->dev) || IS_GEN9(ring->dev)) {
+ if (IS_GEN8(engine->dev) || IS_GEN9(engine->dev)) {
/*
* WaIdleLiteRestore: make sure we never cause a lite
* restore with HEAD==TAIL
@@ -449,11 +449,11 @@ static void execlists_context_unqueue(struct intel_engine_cs *ring)
* for where we prepare the padding after the end of the
* request.
*/
- struct intel_ringbuffer *ringbuf;
+ struct intel_ringbuffer *ring;

- ringbuf = req0->ctx->engine[ring->id].ringbuf;
+ ring = req0->ctx->engine[engine->id].ring;
req0->tail += 8;
- req0->tail &= ringbuf->size - 1;
+ req0->tail &= ring->size - 1;
}
}

@@ -671,7 +671,7 @@ int intel_logical_ring_alloc_request_extras(struct drm_i915_gem_request *request
{
int ret;

- request->ring = request->ctx->engine[request->engine->id].ringbuf;
+ request->ring = request->ctx->engine[request->engine->id].ring;

if (request->ctx != request->engine->default_context) {
ret = intel_lr_context_pin(request);
@@ -1775,7 +1775,7 @@ static int logical_ring_init(struct drm_device *dev, struct intel_engine_cs *rin
ret = intel_lr_context_do_pin(
ring,
ring->default_context->engine[ring->id].state,
- ring->default_context->engine[ring->id].ringbuf);
+ ring->default_context->engine[ring->id].ring);
if (ret) {
DRM_ERROR(
"Failed to pin and map ringbuffer %s: %d\n",
@@ -2177,16 +2177,15 @@ void intel_lr_context_free(struct intel_context *ctx)
struct drm_i915_gem_object *ctx_obj = ctx->engine[i].state;

if (ctx_obj) {
- struct intel_ringbuffer *ringbuf =
- ctx->engine[i].ringbuf;
- struct intel_engine_cs *engine = ringbuf->engine;
+ struct intel_ringbuffer *ring = ctx->engine[i].ring;
+ struct intel_engine_cs *engine = ring->engine;

if (ctx == engine->default_context) {
- intel_unpin_ringbuffer_obj(ringbuf);
+ intel_unpin_ringbuffer_obj(ring);
i915_gem_object_ggtt_unpin(ctx_obj);
}
WARN_ON(ctx->engine[engine->id].pin_count);
- intel_ringbuffer_free(ringbuf);
+ intel_ringbuffer_free(ring);
drm_gem_object_unreference(&ctx_obj->base);
}
}
@@ -2266,7 +2265,7 @@ int intel_lr_context_deferred_alloc(struct intel_context *ctx,
{
struct drm_i915_gem_object *ctx_obj;
uint32_t context_size;
- struct intel_ringbuffer *ringbuf;
+ struct intel_ringbuffer *ring;
int ret;

WARN_ON(ctx->legacy_hw_ctx.rcs_state != NULL);
@@ -2283,19 +2282,19 @@ int intel_lr_context_deferred_alloc(struct intel_context *ctx,
return -ENOMEM;
}

- ringbuf = intel_engine_create_ringbuffer(engine, 4 * PAGE_SIZE);
- if (IS_ERR(ringbuf)) {
- ret = PTR_ERR(ringbuf);
+ ring = intel_engine_create_ringbuffer(engine, 4 * PAGE_SIZE);
+ if (IS_ERR(ring)) {
+ ret = PTR_ERR(ring);
goto error_deref_obj;
}

- ret = populate_lr_context(ctx, ctx_obj, engine, ringbuf);
+ ret = populate_lr_context(ctx, ctx_obj, engine, ring);
if (ret) {
DRM_DEBUG_DRIVER("Failed to populate LRC: %d\n", ret);
goto error_ringbuf;
}

- ctx->engine[engine->id].ringbuf = ringbuf;
+ ctx->engine[engine->id].ring = ring;
ctx->engine[engine->id].state = ctx_obj;

if (ctx != engine->default_context && engine->init_context) {
@@ -2320,10 +2319,10 @@ int intel_lr_context_deferred_alloc(struct intel_context *ctx,
return 0;

error_ringbuf:
- intel_ringbuffer_free(ringbuf);
+ intel_ringbuffer_free(ring);
error_deref_obj:
drm_gem_object_unreference(&ctx_obj->base);
- ctx->engine[engine->id].ringbuf = NULL;
+ ctx->engine[engine->id].ring = NULL;
ctx->engine[engine->id].state = NULL;
return ret;
}
@@ -2332,14 +2331,12 @@ void intel_lr_context_reset(struct drm_device *dev,
struct intel_context *ctx)
{
struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_engine_cs *ring;
+ struct intel_engine_cs *unused;
int i;

- for_each_ring(ring, dev_priv, i) {
- struct drm_i915_gem_object *ctx_obj =
- ctx->engine[ring->id].state;
- struct intel_ringbuffer *ringbuf =
- ctx->engine[ring->id].ringbuf;
+ for_each_ring(unused, dev_priv, i) {
+ struct drm_i915_gem_object *ctx_obj = ctx->engine[i].state;
+ struct intel_ringbuffer *ring = ctx->engine[i].ring;
uint32_t *reg_state;
struct page *page;

@@ -2358,7 +2355,7 @@ void intel_lr_context_reset(struct drm_device *dev,

kunmap_atomic(reg_state);

- ringbuf->head = 0;
- ringbuf->tail = 0;
+ ring->head = 0;
+ ring->tail = 0;
}
}
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:16:27 UTC
Permalink
Since the function is a small wrapper around schedule_delayed_work(),
move it inline to remove the function call overhead for the principal
caller.

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_drv.h | 17 ++++++++++++++++-
drivers/gpu/drm/i915/i915_irq.c | 16 ----------------
2 files changed, 16 insertions(+), 17 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 188bed933f11..201dd330f66a 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2709,7 +2709,22 @@ void intel_hpd_cancel_work(struct drm_i915_private *dev_priv);
bool intel_hpd_pin_to_port(enum hpd_pin pin, enum port *port);

/* i915_irq.c */
-void i915_queue_hangcheck(struct drm_i915_private *dev_priv);
+static inline void i915_queue_hangcheck(struct drm_i915_private *dev_priv)
+{
+ unsigned long delay;
+
+ if (unlikely(!i915.enable_hangcheck))
+ return;
+
+ /* Don't continually defer the hangcheck so that it is always run at
+ * least once after work has been scheduled on any ring. Otherwise,
+ * we will ignore a hung ring if a second ring is kept busy.
+ */
+
+ delay = round_jiffies_up_relative(DRM_I915_HANGCHECK_JIFFIES);
+ schedule_delayed_work(&dev_priv->gpu_error.hangcheck_work, delay);
+}
+
__printf(3, 4)
void i915_handle_error(struct drm_device *dev, bool wedged,
const char *fmt, ...);
diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index 8939438d747d..2a8a9694eec5 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -3173,22 +3173,6 @@ out:
ENABLE_RPM_WAKEREF_ASSERTS(dev_priv);
}

-void i915_queue_hangcheck(struct drm_i915_private *dev_priv)
-{
- unsigned long delay;
-
- if (!i915.enable_hangcheck)
- return;
-
- /* Don't continually defer the hangcheck so that it is always run at
- * least once after work has been scheduled on any ring. Otherwise,
- * we will ignore a hung ring if a second ring is kept busy.
- */
-
- delay = round_jiffies_up_relative(DRM_I915_HANGCHECK_JIFFIES);
- schedule_delayed_work(&dev_priv->gpu_error.hangcheck_work, delay);
-}
-
static void ibx_irq_reset(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:16:46 UTC
Permalink
We know, by design, that whilst the GPU is active (and thus we are
throttling) the retire_worker is queued. Therefore attempting to requeue
it with queue_delayed_work() is a no-op and we can safely remove it.
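(queue_delayed_work() returns false and leaves the pending timer untouched
when the work item is already queued, so the dropped call had no effect
whilst the GPU was busy.)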

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_gem.c | 3 ---
1 file changed, 3 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index efd46adb978b..e9f5ca7ea835 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -4116,9 +4116,6 @@ i915_gem_ring_throttle(struct drm_device *dev, struct drm_file *file)
return 0;

ret = __i915_wait_request(target, true, NULL, NULL);
- if (ret == 0)
- queue_delayed_work(dev_priv->wq, &dev_priv->mm.retire_work, 0);
-
i915_gem_request_unreference__unlocked(target);

return ret;
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:17:32 UTC
Permalink
This patch is broken out of the next just to remove the code motion from
that patch and make it more readable. What we do here is move the
i915_vma_move_to_active() to i915_gem_execbuffer.c and put the three
stages (read, write, fenced) together so that future modifications to
active handling are all located in the same spot. The importance of this
is so that we can more simply control the order in which the requests
are place in the retirement list (i.e. control the order at which we
retire and so control the lifetimes to avoid having to hold onto
references).

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_drv.h | 3 +-
drivers/gpu/drm/i915/i915_gem.c | 15 -------
drivers/gpu/drm/i915/i915_gem_context.c | 7 ++--
drivers/gpu/drm/i915/i915_gem_execbuffer.c | 63 ++++++++++++++++++----------
drivers/gpu/drm/i915/i915_gem_render_state.c | 2 +-
5 files changed, 49 insertions(+), 41 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 0cc3ee589dfb..aa9d3782107e 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2764,7 +2764,8 @@ int __must_check i915_mutex_lock_interruptible(struct drm_device *dev);
int i915_gem_object_sync(struct drm_i915_gem_object *obj,
struct drm_i915_gem_request *to);
void i915_vma_move_to_active(struct i915_vma *vma,
- struct drm_i915_gem_request *req);
+ struct drm_i915_gem_request *req,
+ unsigned flags);
int i915_gem_dumb_create(struct drm_file *file_priv,
struct drm_device *dev,
struct drm_mode_create_dumb *args);
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 9a22fdd8a9f5..164ebdaa0369 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2026,21 +2026,6 @@ void *i915_gem_object_pin_vmap(struct drm_i915_gem_object *obj)
return obj->vmapping;
}

-void i915_vma_move_to_active(struct i915_vma *vma,
- struct drm_i915_gem_request *req)
-{
- struct drm_i915_gem_object *obj = vma->obj;
- struct intel_engine_cs *engine = req->engine;
-
- /* Add a reference if we're newly entering the active list. */
- if (obj->active == 0)
- drm_gem_object_reference(&obj->base);
- obj->active |= intel_engine_flag(engine);
-
- i915_gem_request_mark_active(req, &obj->last_read[engine->id]);
- list_move_tail(&vma->vm_link, &vma->vm->active_list);
-}
-
static void
i915_gem_object_retire__fence(struct i915_gem_active *active,
struct drm_i915_gem_request *req)
diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index fab702abd1cb..310a770b7984 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -778,8 +778,8 @@ static int do_switch(struct drm_i915_gem_request *req)
* MI_SET_CONTEXT instead of when the next seqno has completed.
*/
if (from != NULL) {
- from->legacy_hw_ctx.rcs_state->base.read_domains = I915_GEM_DOMAIN_INSTRUCTION;
- i915_vma_move_to_active(i915_gem_obj_to_ggtt(from->legacy_hw_ctx.rcs_state), req);
+ struct drm_i915_gem_object *obj = from->legacy_hw_ctx.rcs_state;
+
/* As long as MI_SET_CONTEXT is serializing, ie. it flushes the
* whole damn pipeline, we don't need to explicitly mark the
* object dirty. The only exception is that the context must be
@@ -787,7 +787,8 @@ static int do_switch(struct drm_i915_gem_request *req)
* able to defer doing this until we know the object would be
* swapped, but there is no way to do that yet.
*/
- from->legacy_hw_ctx.rcs_state->dirty = 1;
+ obj->base.read_domains = I915_GEM_DOMAIN_INSTRUCTION;
+ i915_vma_move_to_active(i915_gem_obj_to_ggtt(obj), req, 0);

/* obj is kept alive until the next request by its active ref */
i915_gem_object_ggtt_unpin(from->legacy_hw_ctx.rcs_state);
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index c10795f58bfc..9e549bded186 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -1104,6 +1104,44 @@ i915_gem_validate_context(struct drm_device *dev, struct drm_file *file,
return ctx;
}

+void i915_vma_move_to_active(struct i915_vma *vma,
+ struct drm_i915_gem_request *req,
+ unsigned flags)
+{
+ struct drm_i915_gem_object *obj = vma->obj;
+ const unsigned engine = req->engine->id;
+
+ GEM_BUG_ON(!drm_mm_node_allocated(&vma->node));
+
+ obj->dirty = 1; /* be paranoid */
+
+ /* Add a reference if we're newly entering the active list. */
+ if (obj->active == 0)
+ drm_gem_object_reference(&obj->base);
+ obj->active |= 1 << engine;
+ i915_gem_request_mark_active(req, &obj->last_read[engine]);
+
+ if (flags & EXEC_OBJECT_WRITE) {
+ i915_gem_request_mark_active(req, &obj->last_write);
+
+ intel_fb_obj_invalidate(obj, ORIGIN_CS);
+
+ /* update for the implicit flush after a batch */
+ obj->base.write_domain &= ~I915_GEM_GPU_DOMAINS;
+ }
+
+ if (flags & EXEC_OBJECT_NEEDS_FENCE) {
+ i915_gem_request_mark_active(req, &obj->last_fence);
+ if (flags & __EXEC_OBJECT_HAS_FENCE) {
+ struct drm_i915_private *dev_priv = req->i915;
+ list_move_tail(&dev_priv->fence_regs[obj->fence_reg].lru_list,
+ &dev_priv->mm.fence_list);
+ }
+ }
+
+ list_move_tail(&vma->vm_link, &vma->vm->active_list);
+}
+
static void
i915_gem_execbuffer_move_to_active(struct list_head *vmas,
struct drm_i915_gem_request *req)
@@ -1111,35 +1149,18 @@ i915_gem_execbuffer_move_to_active(struct list_head *vmas,
struct i915_vma *vma;

list_for_each_entry(vma, vmas, exec_list) {
- struct drm_i915_gem_exec_object2 *entry = vma->exec_entry;
struct drm_i915_gem_object *obj = vma->obj;
u32 old_read = obj->base.read_domains;
u32 old_write = obj->base.write_domain;

- obj->dirty = 1; /* be paranoid */
obj->base.write_domain = obj->base.pending_write_domain;
- if (obj->base.write_domain == 0)
+ if (obj->base.write_domain)
+ vma->exec_entry->flags |= EXEC_OBJECT_WRITE;
+ else
obj->base.pending_read_domains |= obj->base.read_domains;
obj->base.read_domains = obj->base.pending_read_domains;

- i915_vma_move_to_active(vma, req);
- if (obj->base.write_domain) {
- i915_gem_request_mark_active(req, &obj->last_write);
-
- intel_fb_obj_invalidate(obj, ORIGIN_CS);
-
- /* update for the implicit flush after a batch */
- obj->base.write_domain &= ~I915_GEM_GPU_DOMAINS;
- }
- if (entry->flags & EXEC_OBJECT_NEEDS_FENCE) {
- i915_gem_request_mark_active(req, &obj->last_fence);
- if (entry->flags & __EXEC_OBJECT_HAS_FENCE) {
- struct drm_i915_private *dev_priv = req->i915;
- list_move_tail(&dev_priv->fence_regs[obj->fence_reg].lru_list,
- &dev_priv->mm.fence_list);
- }
- }
-
+ i915_vma_move_to_active(vma, req, vma->exec_entry->flags);
trace_i915_gem_object_change_domain(obj, old_read, old_write);
}
}
diff --git a/drivers/gpu/drm/i915/i915_gem_render_state.c b/drivers/gpu/drm/i915/i915_gem_render_state.c
index 222f25777bb4..68054f5c4ab1 100644
--- a/drivers/gpu/drm/i915/i915_gem_render_state.c
+++ b/drivers/gpu/drm/i915/i915_gem_render_state.c
@@ -230,7 +230,7 @@ int i915_gem_render_state_init(struct drm_i915_gem_request *req)
goto out;
}

- i915_vma_move_to_active(i915_gem_obj_to_ggtt(so.obj), req);
+ i915_vma_move_to_active(i915_gem_obj_to_ggtt(so.obj), req, 0);
out:
render_state_fini(&so);
return ret;
--
2.7.0.rc3
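
To show the consolidated interface in use, consider a hypothetical call site
(vma and req assumed to be in scope; EXEC_OBJECT_WRITE and
EXEC_OBJECT_NEEDS_FENCE are the execbuffer flags the patch reuses):

	/* Mark the vma as busy on req, recording both a write and a
	 * fence dependency in a single call. */
	i915_vma_move_to_active(vma, req,
				EXEC_OBJECT_WRITE | EXEC_OBJECT_NEEDS_FENCE);

With all three stages in one function, the order in which the request is
attached to the read, write and fence trackers is fixed at a single spot.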
Chris Wilson
2016-01-11 09:17:04 UTC
Permalink
Rather than recomputing whether semaphores are enabled, we can do that
computation once during early initialisation as the i915.semaphores
module parameter is now read-only.

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_debugfs.c | 2 +-
drivers/gpu/drm/i915/i915_dma.c | 2 +-
drivers/gpu/drm/i915/i915_drv.c | 25 -----------------------
drivers/gpu/drm/i915/i915_drv.h | 1 -
drivers/gpu/drm/i915/i915_gem.c | 35 ++++++++++++++++++++++++++++++---
drivers/gpu/drm/i915/i915_gem_context.c | 2 +-
drivers/gpu/drm/i915/i915_gpu_error.c | 2 +-
drivers/gpu/drm/i915/intel_ringbuffer.c | 20 +++++++++----------
8 files changed, 46 insertions(+), 43 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 5335072f2047..387ae77d3c29 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -3146,7 +3146,7 @@ static int i915_semaphore_status(struct seq_file *m, void *unused)
int num_rings = hweight32(INTEL_INFO(dev)->ring_mask);
int i, j, ret;

- if (!i915_semaphore_is_enabled(dev)) {
+ if (!i915.semaphores) {
seq_puts(m, "Semaphores are disabled\n");
return 0;
}
diff --git a/drivers/gpu/drm/i915/i915_dma.c b/drivers/gpu/drm/i915/i915_dma.c
index 9e49e304dd8e..4c72c83cfa28 100644
--- a/drivers/gpu/drm/i915/i915_dma.c
+++ b/drivers/gpu/drm/i915/i915_dma.c
@@ -126,7 +126,7 @@ static int i915_getparam(struct drm_device *dev, void *data,
value = 1;
break;
case I915_PARAM_HAS_SEMAPHORES:
- value = i915_semaphore_is_enabled(dev);
+ value = i915.semaphores;
break;
case I915_PARAM_HAS_PRIME_VMAP_FLUSH:
value = 1;
diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index e9f85fd0542f..cc831a34f7bb 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -515,31 +515,6 @@ void intel_detect_pch(struct drm_device *dev)
pci_dev_put(pch);
}

-bool i915_semaphore_is_enabled(struct drm_device *dev)
-{
- if (INTEL_INFO(dev)->gen < 6)
- return false;
-
- if (i915.semaphores >= 0)
- return i915.semaphores;
-
- /* TODO: make semaphores and Execlists play nicely together */
- if (i915.enable_execlists)
- return false;
-
- /* Until we get further testing... */
- if (IS_GEN8(dev))
- return false;
-
-#ifdef CONFIG_INTEL_IOMMU
- /* Enable semaphores on SNB when IO remapping is off */
- if (INTEL_INFO(dev)->gen == 6 && intel_iommu_gfx_mapped)
- return false;
-#endif
-
- return true;
-}
-
static void intel_suspend_encoders(struct drm_i915_private *dev_priv)
{
struct drm_device *dev = dev_priv->dev;
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 56cf2ffc1eac..58e9e5e50769 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -3226,7 +3226,6 @@ extern void intel_set_memory_cxsr(struct drm_i915_private *dev_priv,
extern void intel_detect_pch(struct drm_device *dev);
extern int intel_enable_rc6(const struct drm_device *dev);

-extern bool i915_semaphore_is_enabled(struct drm_device *dev);
int i915_reg_read_ioctl(struct drm_device *dev, void *data,
struct drm_file *file);
int i915_get_reset_stats_ioctl(struct drm_device *dev, void *data,
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index a4f9c5bbb883..31926a4fb42a 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2567,7 +2567,7 @@ __i915_gem_object_sync(struct drm_i915_gem_object *obj,
if (i915_gem_request_completed(from_req))
return 0;

- if (!i915_semaphore_is_enabled(obj->base.dev)) {
+ if (!i915.semaphores) {
struct drm_i915_private *i915 = to_i915(obj->base.dev);
ret = __i915_wait_request(from_req,
i915->mm.interruptible,
@@ -4304,13 +4304,42 @@ out:
return ret;
}

+static bool i915_gem_sanitize_semaphore(struct drm_i915_private *dev_priv,
+ int param_value)
+{
+ if (INTEL_INFO(dev_priv)->gen < 6)
+ return false;
+
+ if (param_value >= 0)
+ return param_value;
+
+ /* TODO: make semaphores and Execlists play nicely together */
+ if (i915.enable_execlists)
+ return false;
+
+ /* Until we get further testing... */
+ if (IS_GEN8(dev_priv))
+ return false;
+
+#ifdef CONFIG_INTEL_IOMMU
+ /* Enable semaphores on SNB when IO remapping is off */
+ if (INTEL_INFO(dev_priv)->gen == 6 && intel_iommu_gfx_mapped)
+ return false;
+#endif
+
+ return true;
+}
+
int i915_gem_init(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
int ret;

- i915.enable_execlists = intel_sanitize_enable_execlists(dev,
- i915.enable_execlists);
+ i915.enable_execlists =
+ intel_sanitize_enable_execlists(dev, i915.enable_execlists);
+
+ i915.semaphores =
+ i915_gem_sanitize_semaphore(dev_priv, i915.semaphores);

mutex_lock(&dev->struct_mutex);

diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index 0aea5ccf6d68..361be1085a18 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -523,7 +523,7 @@ mi_set_context(struct drm_i915_gem_request *req, u32 hw_flags)
u32 flags = hw_flags | MI_MM_SPACE_GTT;
const int num_rings =
/* Use an extended w/a on ivb+ if signalling from other rings */
- i915_semaphore_is_enabled(ring->dev) ?
+ i915.semaphores ?
hweight32(INTEL_INFO(ring->dev)->ring_mask) - 1 :
0;
int len, i, ret;
diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index 05f054898a95..84ce91275fdd 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -823,7 +823,7 @@ static void gen8_record_semaphore_state(struct drm_i915_private *dev_priv,
struct intel_engine_cs *to;
int i;

- if (!i915_semaphore_is_enabled(dev_priv->dev))
+ if (!i915.semaphores)
return;

if (!error->semaphore_obj)
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index 02b7032e16e0..e143da96dcfa 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -2510,7 +2510,7 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
ring->mmio_base = RENDER_RING_BASE;

if (INTEL_INFO(dev)->gen >= 8) {
- if (i915_semaphore_is_enabled(dev)) {
+ if (i915.semaphores) {
obj = i915_gem_alloc_object(dev, 4096);
if (obj == NULL) {
DRM_ERROR("Failed to allocate semaphore bo. Disabling semaphores\n");
@@ -2534,7 +2534,7 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
ring->irq_disable = gen8_ring_disable_irq;
ring->irq_enable_mask = GT_RENDER_USER_INTERRUPT;
ring->irq_seqno_barrier = gen6_seqno_barrier;
- if (i915_semaphore_is_enabled(dev)) {
+ if (i915.semaphores) {
WARN_ON(!dev_priv->semaphore_obj);
ring->semaphore.sync_to = gen8_ring_sync;
ring->semaphore.signal = gen8_rcs_signal;
@@ -2550,7 +2550,7 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
ring->irq_disable = gen6_ring_disable_irq;
ring->irq_enable_mask = GT_RENDER_USER_INTERRUPT;
ring->irq_seqno_barrier = gen6_seqno_barrier;
- if (i915_semaphore_is_enabled(dev)) {
+ if (i915.semaphores) {
ring->semaphore.sync_to = gen6_ring_sync;
ring->semaphore.signal = gen6_signal;
/*
@@ -2666,7 +2666,7 @@ int intel_init_bsd_ring_buffer(struct drm_device *dev)
ring->irq_disable = gen8_ring_disable_irq;
ring->dispatch_execbuffer =
gen8_ring_dispatch_execbuffer;
- if (i915_semaphore_is_enabled(dev)) {
+ if (i915.semaphores) {
ring->semaphore.sync_to = gen8_ring_sync;
ring->semaphore.signal = gen8_xcs_signal;
GEN8_RING_SEMAPHORE_INIT;
@@ -2677,7 +2677,7 @@ int intel_init_bsd_ring_buffer(struct drm_device *dev)
ring->irq_disable = gen6_ring_disable_irq;
ring->dispatch_execbuffer =
gen6_ring_dispatch_execbuffer;
- if (i915_semaphore_is_enabled(dev)) {
+ if (i915.semaphores) {
ring->semaphore.sync_to = gen6_ring_sync;
ring->semaphore.signal = gen6_signal;
ring->semaphore.mbox.wait[RCS] = MI_SEMAPHORE_SYNC_VR;
@@ -2734,7 +2734,7 @@ int intel_init_bsd2_ring_buffer(struct drm_device *dev)
ring->irq_disable = gen8_ring_disable_irq;
ring->dispatch_execbuffer =
gen8_ring_dispatch_execbuffer;
- if (i915_semaphore_is_enabled(dev)) {
+ if (i915.semaphores) {
ring->semaphore.sync_to = gen8_ring_sync;
ring->semaphore.signal = gen8_xcs_signal;
GEN8_RING_SEMAPHORE_INIT;
@@ -2763,7 +2763,7 @@ int intel_init_blt_ring_buffer(struct drm_device *dev)
ring->irq_enable = gen8_ring_enable_irq;
ring->irq_disable = gen8_ring_disable_irq;
ring->dispatch_execbuffer = gen8_ring_dispatch_execbuffer;
- if (i915_semaphore_is_enabled(dev)) {
+ if (i915.semaphores) {
ring->semaphore.sync_to = gen8_ring_sync;
ring->semaphore.signal = gen8_xcs_signal;
GEN8_RING_SEMAPHORE_INIT;
@@ -2773,7 +2773,7 @@ int intel_init_blt_ring_buffer(struct drm_device *dev)
ring->irq_enable = gen6_ring_enable_irq;
ring->irq_disable = gen6_ring_disable_irq;
ring->dispatch_execbuffer = gen6_ring_dispatch_execbuffer;
- if (i915_semaphore_is_enabled(dev)) {
+ if (i915.semaphores) {
ring->semaphore.signal = gen6_signal;
ring->semaphore.sync_to = gen6_ring_sync;
/*
@@ -2820,7 +2820,7 @@ int intel_init_vebox_ring_buffer(struct drm_device *dev)
ring->irq_enable = gen8_ring_enable_irq;
ring->irq_disable = gen8_ring_disable_irq;
ring->dispatch_execbuffer = gen8_ring_dispatch_execbuffer;
- if (i915_semaphore_is_enabled(dev)) {
+ if (i915.semaphores) {
ring->semaphore.sync_to = gen8_ring_sync;
ring->semaphore.signal = gen8_xcs_signal;
GEN8_RING_SEMAPHORE_INIT;
@@ -2830,7 +2830,7 @@ int intel_init_vebox_ring_buffer(struct drm_device *dev)
ring->irq_enable = hsw_vebox_enable_irq;
ring->irq_disable = hsw_vebox_disable_irq;
ring->dispatch_execbuffer = gen6_ring_dispatch_execbuffer;
- if (i915_semaphore_is_enabled(dev)) {
+ if (i915.semaphores) {
ring->semaphore.sync_to = gen6_ring_sync;
ring->semaphore.signal = gen6_signal;
ring->semaphore.mbox.wait[RCS] = MI_SEMAPHORE_SYNC_VER;
--
2.7.0.rc3
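
The sanitize helper follows the common tristate module-parameter idiom: -1
means auto-detect, 0 forces the feature off, 1 forces it on. A minimal
standalone sketch of just that idiom, with the hardware decision reduced to
a boolean placeholder (hw_default is an assumption for illustration only):

#include <stdbool.h>
#include <stdio.h>

/* -1 requests auto-detection; 0/1 are explicit user overrides. */
static bool sanitize_tristate(int value, bool hw_default)
{
	if (value >= 0)
		return value;
	return hw_default;	/* auto: decide from the hardware */
}

int main(void)
{
	printf("%d %d %d\n",
	       sanitize_tristate(-1, true),	/* 1: auto, hw capable */
	       sanitize_tristate(0, true),	/* 0: forced off */
	       sanitize_tristate(1, false));	/* 1: forced on */
	return 0;
}

Once sanitized at init, every later check is a plain read of i915.semaphores,
which is what lets the i915_semaphore_is_enabled() calls collapse throughout
the driver.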
Chris Wilson
2016-01-11 09:17:28 UTC
Permalink
As we can now have multiple VMAs inside the global GTT (with partial
mappings, rotations, etc.), it is no longer true that an object has just
a single GGTT entry, and so we should walk the full vma_list to count up
the actual usage. In addition to unifying the two walkers, switch from
multiplying the object size for each vma to summing the bound vma sizes.
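For example (numbers purely illustrative): a 4 MiB object bound with both a
full GGTT mapping and a 1 MiB partial view previously counted as 8 MiB of
global usage (the object size counted once per vma), whereas summing the
bound vma sizes reports the true 5 MiB.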

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_debugfs.c | 46 +++++++++++++++----------------------
1 file changed, 18 insertions(+), 28 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index f311df758195..dd1788c81b90 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -332,6 +332,7 @@ static int per_file_stats(int id, void *ptr, void *data)
struct drm_i915_gem_object *obj = ptr;
struct file_stats *stats = data;
struct i915_vma *vma;
+ int bound = 0;

stats->count++;
stats->total += obj->base.size;
@@ -339,41 +340,30 @@ static int per_file_stats(int id, void *ptr, void *data)
if (obj->base.name || obj->base.dma_buf)
stats->shared += obj->base.size;

- if (USES_FULL_PPGTT(obj->base.dev)) {
- list_for_each_entry(vma, &obj->vma_list, obj_link) {
- struct i915_hw_ppgtt *ppgtt;
+ list_for_each_entry(vma, &obj->vma_list, obj_link) {
+ if (!drm_mm_node_allocated(&vma->node))
+ continue;

- if (!drm_mm_node_allocated(&vma->node))
- continue;
+ bound++;

- if (i915_is_ggtt(vma->vm)) {
- stats->global += obj->base.size;
- continue;
- }
-
- ppgtt = container_of(vma->vm, struct i915_hw_ppgtt, base);
+ if (i915_is_ggtt(vma->vm)) {
+ stats->global += vma->node.size;
+ } else {
+ struct i915_hw_ppgtt *ppgtt
+ = container_of(vma->vm,
+ struct i915_hw_ppgtt,
+ base);
if (ppgtt->file_priv != stats->file_priv)
continue;
-
- if (obj->active) /* XXX per-vma statistic */
- stats->active += obj->base.size;
- else
- stats->inactive += obj->base.size;
-
- return 0;
- }
- } else {
- if (i915_gem_obj_ggtt_bound(obj)) {
- stats->global += obj->base.size;
- if (obj->active)
- stats->active += obj->base.size;
- else
- stats->inactive += obj->base.size;
- return 0;
}
+
+ if (obj->active) /* XXX per-vma statistic */
+ stats->active += vma->node.size;
+ else
+ stats->inactive += vma->node.size;
}

- if (!list_empty(&obj->global_list))
+ if (!bound)
stats->unbound += obj->base.size;

return 0;
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:17:31 UTC
Permalink
For the global GTT (and aliasing GTT), the address space is owned by the
device (it is a global resource) and so the per-file owner field is
NULL. For per-process GTT (where we create an address space per
context), each is owned by the opening file. We can use this ownership
information both to distinguish GGTT from ppGTT address spaces and to
occasionally inspect the owner.

v2: Whitespace, tells us who owns i915_address_space

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_debugfs.c | 2 +-
drivers/gpu/drm/i915/i915_drv.h | 1 -
drivers/gpu/drm/i915/i915_gem_context.c | 3 ++-
drivers/gpu/drm/i915/i915_gem_gtt.c | 27 ++++++++++++++-------------
drivers/gpu/drm/i915/i915_gem_gtt.h | 21 ++++++++++++++-------
5 files changed, 31 insertions(+), 23 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 99a6181b012e..0d1f470567b0 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -352,7 +352,7 @@ static int per_file_stats(int id, void *ptr, void *data)
= container_of(vma->vm,
struct i915_hw_ppgtt,
base);
- if (ppgtt->file_priv != stats->file_priv)
+ if (ppgtt->base.file != stats->file_priv)
continue;
}

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index f840cc55f1ab..0cc3ee589dfb 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2913,7 +2913,6 @@ i915_vm_to_ppgtt(struct i915_address_space *vm)
return container_of(vm, struct i915_hw_ppgtt, base);
}

-
static inline bool i915_gem_obj_ggtt_bound(struct drm_i915_gem_object *obj)
{
return i915_gem_obj_ggtt_bound_view(obj, &i915_ggtt_view_normal);
diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index 05b4e0e85f24..fab702abd1cb 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -296,7 +296,8 @@ i915_gem_create_context(struct drm_device *dev,
}

if (USES_FULL_PPGTT(dev)) {
- struct i915_hw_ppgtt *ppgtt = i915_ppgtt_create(dev, file_priv);
+ struct i915_hw_ppgtt *ppgtt =
+ i915_ppgtt_create(to_i915(dev), file_priv);

if (IS_ERR_OR_NULL(ppgtt)) {
DRM_DEBUG_DRIVER("PPGTT setup failed (%ld)\n",
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 06117bd0fc00..3a07ff622bd6 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -2112,11 +2112,12 @@ static int gen6_ppgtt_init(struct i915_hw_ppgtt *ppgtt)
return 0;
}

-static int __hw_ppgtt_init(struct drm_device *dev, struct i915_hw_ppgtt *ppgtt)
+static int __hw_ppgtt_init(struct i915_hw_ppgtt *ppgtt,
+ struct drm_i915_private *dev_priv)
{
- ppgtt->base.dev = dev;
+ ppgtt->base.dev = dev_priv->dev;

- if (INTEL_INFO(dev)->gen < 8)
+ if (INTEL_INFO(dev_priv)->gen < 8)
return gen6_ppgtt_init(ppgtt);
else
return gen8_ppgtt_init(ppgtt);
@@ -2132,15 +2133,17 @@ static void i915_address_space_init(struct i915_address_space *vm,
list_add_tail(&vm->global_link, &dev_priv->vm_list);
}

-int i915_ppgtt_init(struct drm_device *dev, struct i915_hw_ppgtt *ppgtt)
+int i915_ppgtt_init(struct i915_hw_ppgtt *ppgtt,
+ struct drm_i915_private *dev_priv,
+ struct drm_i915_file_private *file_priv)
{
- struct drm_i915_private *dev_priv = dev->dev_private;
- int ret = 0;
+ int ret;

- ret = __hw_ppgtt_init(dev, ppgtt);
+ ret = __hw_ppgtt_init(ppgtt, dev_priv);
if (ret == 0) {
kref_init(&ppgtt->ref);
i915_address_space_init(&ppgtt->base, dev_priv);
+ ppgtt->base.file = file_priv;
}

return ret;
@@ -2183,7 +2186,8 @@ int i915_ppgtt_init_ring(struct drm_i915_gem_request *req)
}

struct i915_hw_ppgtt *
-i915_ppgtt_create(struct drm_device *dev, struct drm_i915_file_private *fpriv)
+i915_ppgtt_create(struct drm_i915_private *dev_priv,
+ struct drm_i915_file_private *fpriv)
{
struct i915_hw_ppgtt *ppgtt;
int ret;
@@ -2192,14 +2196,12 @@ i915_ppgtt_create(struct drm_device *dev, struct drm_i915_file_private *fpriv)
if (!ppgtt)
return ERR_PTR(-ENOMEM);

- ret = i915_ppgtt_init(dev, ppgtt);
+ ret = i915_ppgtt_init(ppgtt, dev_priv, fpriv);
if (ret) {
kfree(ppgtt);
return ERR_PTR(ret);
}

- ppgtt->file_priv = fpriv;
-
trace_i915_ppgtt_create(&ppgtt->base);

return ppgtt;
@@ -2724,7 +2726,7 @@ int i915_global_gtt_setup(struct drm_device *dev)
if (!ppgtt)
return -ENOMEM;

- ret = __hw_ppgtt_init(dev, ppgtt);
+ ret = __hw_ppgtt_init(ppgtt, dev_priv);
if (ret) {
ppgtt->base.cleanup(&ppgtt->base);
kfree(ppgtt);
@@ -3150,7 +3152,6 @@ int i915_gem_gtt_init(struct drm_device *dev)
}

gtt->base.dev = dev;
- gtt->base.is_ggtt = true;

ret = gtt->gtt_probe(dev, &gtt->base.total, &gtt->stolen_size,
&gtt->mappable_base, &gtt->mappable_end);
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
index 633b9b2e1acb..9d3984602d34 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.h
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
@@ -273,12 +273,19 @@ struct i915_pml4 {
struct i915_address_space {
struct drm_mm mm;
struct drm_device *dev;
+ /* Every address space belongs to a struct file - except for the global
+ * GTT that is owned by the driver (and so @file is set to NULL). In
+ * principle, no information should leak from one context to another
+ * (or between files/processes etc) unless explicitly shared by the
+ * owner. Tracking the owner is important in order to free up per-file
+ * objects along with the file, to aid resource tracking, and to
+ * assign blame.
+ */
+ struct drm_i915_file_private *file;
struct list_head global_link;
u64 start; /* Start offset always 0 for dri2 */
u64 total; /* size addr space maps (ex. 2GB for ggtt) */

- bool is_ggtt;
-
struct i915_page_scratch *scratch_page;
struct i915_page_table *scratch_pt;
struct i915_page_directory *scratch_pd;
@@ -334,7 +341,7 @@ struct i915_address_space {
u32 flags);
};

-#define i915_is_ggtt(V) ((V)->is_ggtt)
+#define i915_is_ggtt(V) ((V)->file == NULL)

/* The Graphics Translation Table is the way in which GEN hardware translates a
* Graphics Virtual Address into a Physical Address. In addition to the normal
@@ -376,8 +383,6 @@ struct i915_hw_ppgtt {
struct i915_page_directory pd; /* GEN6-7 */
};

- struct drm_i915_file_private *file_priv;
-
gen6_pte_t __iomem *pd_addr;

int (*enable)(struct i915_hw_ppgtt *ppgtt);
@@ -523,11 +528,13 @@ int i915_global_gtt_setup(struct drm_device *dev);
void i915_global_gtt_cleanup(struct drm_device *dev);


-int i915_ppgtt_init(struct drm_device *dev, struct i915_hw_ppgtt *ppgtt);
+int i915_ppgtt_init(struct i915_hw_ppgtt *ppgtt,
+ struct drm_i915_private *dev_priv,
+ struct drm_i915_file_private *file_priv);
int i915_ppgtt_init_hw(struct drm_device *dev);
int i915_ppgtt_init_ring(struct drm_i915_gem_request *req);
void i915_ppgtt_release(struct kref *kref);
-struct i915_hw_ppgtt *i915_ppgtt_create(struct drm_device *dev,
+struct i915_hw_ppgtt *i915_ppgtt_create(struct drm_i915_private *dev_priv,
struct drm_i915_file_private *fpriv);
static inline void i915_ppgtt_get(struct i915_hw_ppgtt *ppgtt)
{
--
2.7.0.rc3
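
Since the global GTT is now identified by a NULL owner (the i915_is_ggtt()
macro above), the inverse test is equally direct. A hypothetical helper,
shown purely for illustration and not part of the patch:

static inline bool i915_is_ppgtt(const struct i915_address_space *vm)
{
	return vm->file != NULL;	/* owned by the file that opened it */
}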
Chris Wilson
2016-01-11 09:17:13 UTC
Permalink
Using intel_ring_* to refer to the intel_engine_cs functions is most
confusing!

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_debugfs.c | 10 +++----
drivers/gpu/drm/i915/i915_dma.c | 8 +++---
drivers/gpu/drm/i915/i915_drv.h | 4 +--
drivers/gpu/drm/i915/i915_gem.c | 22 +++++++-------
drivers/gpu/drm/i915/i915_gem_context.c | 8 +++---
drivers/gpu/drm/i915/i915_gem_execbuffer.c | 6 ++--
drivers/gpu/drm/i915/i915_gem_request.c | 8 +++---
drivers/gpu/drm/i915/i915_gem_request.h | 4 +--
drivers/gpu/drm/i915/i915_gpu_error.c | 8 +++---
drivers/gpu/drm/i915/i915_guc_submission.c | 6 ++--
drivers/gpu/drm/i915/i915_irq.c | 18 ++++++------
drivers/gpu/drm/i915/i915_trace.h | 2 +-
drivers/gpu/drm/i915/intel_breadcrumbs.c | 4 +--
drivers/gpu/drm/i915/intel_lrc.c | 17 +++++------
drivers/gpu/drm/i915/intel_mocs.c | 6 ++--
drivers/gpu/drm/i915/intel_ringbuffer.c | 46 ++++++++++++++----------------
drivers/gpu/drm/i915/intel_ringbuffer.h | 36 +++++++++++------------
17 files changed, 104 insertions(+), 109 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 6e91726db8d3..dec10784c2bc 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -599,7 +599,7 @@ static int i915_gem_pageflip_info(struct seq_file *m, void *data)
engine->name,
i915_gem_request_get_seqno(work->flip_queued_req),
dev_priv->next_seqno,
- intel_ring_get_seqno(engine),
+ intel_engine_get_seqno(engine),
i915_gem_request_completed(work->flip_queued_req));
} else
seq_printf(m, "Flip not associated with any ring\n");
@@ -732,7 +732,7 @@ static void i915_ring_seqno_info(struct seq_file *m,
struct rb_node *rb;

seq_printf(m, "Current sequence (%s): %x\n",
- ring->name, intel_ring_get_seqno(ring));
+ ring->name, intel_engine_get_seqno(ring));

seq_printf(m, "Current user interrupts (%s): %x\n",
ring->name, READ_ONCE(ring->user_interrupts));
@@ -1354,8 +1354,8 @@ static int i915_hangcheck_info(struct seq_file *m, void *unused)
intel_runtime_pm_get(dev_priv);

for_each_ring(ring, dev_priv, i) {
- acthd[i] = intel_ring_get_active_head(ring);
- seqno[i] = intel_ring_get_seqno(ring);
+ acthd[i] = intel_engine_get_active_head(ring);
+ seqno[i] = intel_engine_get_seqno(ring);
}

i915_get_extra_instdone(dev, instdone);
@@ -2496,7 +2496,7 @@ static int i915_guc_info(struct seq_file *m, void *data)
struct intel_guc guc;
struct i915_guc_client client = {};
struct intel_engine_cs *ring;
- enum intel_ring_id i;
+ enum intel_engine_id i;
u64 total = 0;

if (!HAS_GUC_SCHED(dev_priv->dev))
diff --git a/drivers/gpu/drm/i915/i915_dma.c b/drivers/gpu/drm/i915/i915_dma.c
index 4c72c83cfa28..c0242ce45e43 100644
--- a/drivers/gpu/drm/i915/i915_dma.c
+++ b/drivers/gpu/drm/i915/i915_dma.c
@@ -87,16 +87,16 @@ static int i915_getparam(struct drm_device *dev, void *data,
value = 1;
break;
case I915_PARAM_HAS_BSD:
- value = intel_ring_initialized(&dev_priv->ring[VCS]);
+ value = intel_engine_initialized(&dev_priv->ring[VCS]);
break;
case I915_PARAM_HAS_BLT:
- value = intel_ring_initialized(&dev_priv->ring[BCS]);
+ value = intel_engine_initialized(&dev_priv->ring[BCS]);
break;
case I915_PARAM_HAS_VEBOX:
- value = intel_ring_initialized(&dev_priv->ring[VECS]);
+ value = intel_engine_initialized(&dev_priv->ring[VECS]);
break;
case I915_PARAM_HAS_BSD2:
- value = intel_ring_initialized(&dev_priv->ring[VCS2]);
+ value = intel_engine_initialized(&dev_priv->ring[VCS2]);
break;
case I915_PARAM_HAS_RELAXED_FENCING:
value = 1;
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 9f06dd19bfb2..466adc6617f0 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -520,7 +520,7 @@ struct drm_i915_error_state {
/* Software tracked state */
bool waiting;
int hangcheck_score;
- enum intel_ring_hangcheck_action hangcheck_action;
+ enum intel_engine_hangcheck_action hangcheck_action;
int num_requests;

/* our own tracking of ring head and tail */
@@ -1973,7 +1973,7 @@ static inline struct drm_i915_private *guc_to_i915(struct intel_guc *guc)
/* Iterate over initialised rings */
#define for_each_ring(ring__, dev_priv__, i__) \
for ((i__) = 0; (i__) < I915_NUM_RINGS; (i__)++) \
- for_each_if ((((ring__) = &(dev_priv__)->ring[(i__)]), intel_ring_initialized((ring__))))
+ for_each_if ((((ring__) = &(dev_priv__)->ring[(i__)]), intel_engine_initialized((ring__))))

enum hdmi_force_audio {
HDMI_AUDIO_OFF_DVI = -2, /* no aux data for HDMI-DVI converter */
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 430c439ece26..a81cad666d3a 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2067,7 +2067,7 @@ void i915_vma_move_to_active(struct i915_vma *vma,
/* Add a reference if we're newly entering the active list. */
if (obj->active == 0)
drm_gem_object_reference(&obj->base);
- obj->active |= intel_ring_flag(engine);
+ obj->active |= intel_engine_flag(engine);

list_move_tail(&obj->ring_list[engine->id], &engine->active_list);
i915_gem_request_assign(&obj->last_read_req[engine->id], req);
@@ -2079,7 +2079,7 @@ static void
i915_gem_object_retire__write(struct drm_i915_gem_object *obj)
{
GEM_BUG_ON(obj->last_write_req == NULL);
- GEM_BUG_ON(!(obj->active & intel_ring_flag(obj->last_write_req->engine)));
+ GEM_BUG_ON(!(obj->active & intel_engine_flag(obj->last_write_req->engine)));

i915_gem_request_assign(&obj->last_write_req, NULL);
intel_fb_obj_flush(obj, true, ORIGIN_CS);
@@ -2273,7 +2273,7 @@ static void i915_gem_reset_ring_cleanup(struct drm_i915_private *dev_priv,
intel_ring_update_space(buffer);
}

- intel_ring_init_seqno(ring, ring->last_submitted_seqno);
+ intel_engine_init_seqno(ring, ring->last_submitted_seqno);
}

void i915_gem_reset(struct drm_device *dev)
@@ -2576,7 +2576,7 @@ __i915_gem_object_sync(struct drm_i915_gem_object *obj,

i915_gem_object_retire_request(obj, from_req);
} else {
- int idx = intel_ring_sync_index(from, to);
+ int idx = intel_engine_sync_index(from, to);
u32 seqno = i915_gem_request_get_seqno(from_req);

WARN_ON(!to_req);
@@ -2794,7 +2794,7 @@ int i915_gpu_idle(struct drm_device *dev)
return ret;
}

- ret = intel_ring_idle(ring);
+ ret = intel_engine_idle(ring);
if (ret)
return ret;
}
@@ -4180,13 +4180,13 @@ int i915_gem_init_rings(struct drm_device *dev)
return 0;

cleanup_vebox_ring:
- intel_cleanup_ring_buffer(&dev_priv->ring[VECS]);
+ intel_engine_cleanup(&dev_priv->ring[VECS]);
cleanup_blt_ring:
- intel_cleanup_ring_buffer(&dev_priv->ring[BCS]);
+ intel_engine_cleanup(&dev_priv->ring[BCS]);
cleanup_bsd_ring:
- intel_cleanup_ring_buffer(&dev_priv->ring[VCS]);
+ intel_engine_cleanup(&dev_priv->ring[VCS]);
cleanup_render_ring:
- intel_cleanup_ring_buffer(&dev_priv->ring[RCS]);
+ intel_engine_cleanup(&dev_priv->ring[RCS]);

return ret;
}
@@ -4341,8 +4341,8 @@ int i915_gem_init(struct drm_device *dev)
if (!i915.enable_execlists) {
dev_priv->gt.execbuf_submit = i915_gem_ringbuffer_submission;
dev_priv->gt.init_rings = i915_gem_init_rings;
- dev_priv->gt.cleanup_ring = intel_cleanup_ring_buffer;
- dev_priv->gt.stop_ring = intel_stop_ring_buffer;
+ dev_priv->gt.cleanup_ring = intel_engine_cleanup;
+ dev_priv->gt.stop_ring = intel_engine_stop;
} else {
dev_priv->gt.execbuf_submit = intel_execlists_submission;
dev_priv->gt.init_rings = intel_logical_rings_init;
diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index 5b4e77a80c19..ac2e205fe3b4 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -610,7 +610,7 @@ static inline bool should_skip_switch(struct intel_engine_cs *ring,
return false;

if (to->ppgtt && from == to &&
- !(intel_ring_flag(ring) & to->ppgtt->pd_dirty_rings))
+ !(intel_engine_flag(ring) & to->ppgtt->pd_dirty_rings))
return true;

return false;
@@ -691,7 +691,7 @@ static int do_switch(struct drm_i915_gem_request *req)
goto unpin_out;

/* Doing a PD load always reloads the page dirs */
- to->ppgtt->pd_dirty_rings &= ~intel_ring_flag(engine);
+ to->ppgtt->pd_dirty_rings &= ~intel_engine_flag(engine);
}

if (engine->id != RCS) {
@@ -719,9 +719,9 @@ static int do_switch(struct drm_i915_gem_request *req)
* space. This means we must enforce that a page table load
* occur when this occurs. */
} else if (to->ppgtt &&
- (intel_ring_flag(engine) & to->ppgtt->pd_dirty_rings)) {
+ (intel_engine_flag(engine) & to->ppgtt->pd_dirty_rings)) {
hw_flags |= MI_FORCE_RESTORE;
- to->ppgtt->pd_dirty_rings &= ~intel_ring_flag(engine);
+ to->ppgtt->pd_dirty_rings &= ~intel_engine_flag(engine);
}

/* We should never emit switch_mm more than once */
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index a0f5a997c2f2..b7c90072f7d4 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -942,7 +942,7 @@ static int
i915_gem_execbuffer_move_to_gpu(struct drm_i915_gem_request *req,
struct list_head *vmas)
{
- const unsigned other_rings = ~intel_ring_flag(req->engine);
+ const unsigned other_rings = ~intel_engine_flag(req->engine);
struct i915_vma *vma;
uint32_t flush_domains = 0;
bool flush_chipset = false;
@@ -972,7 +972,7 @@ i915_gem_execbuffer_move_to_gpu(struct drm_i915_gem_request *req,
/* Unconditionally invalidate gpu caches and ensure that we do flush
* any residual writes from the previous batch.
*/
- return intel_ring_invalidate_all_caches(req);
+ return intel_engine_invalidate_all_caches(req);
}

static bool
@@ -1443,7 +1443,7 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
} else
ring = &dev_priv->ring[(args->flags & I915_EXEC_RING_MASK) - 1];

- if (!intel_ring_initialized(ring)) {
+ if (!intel_engine_initialized(ring)) {
DRM_DEBUG("execbuf with invalid ring: %d\n",
(int)(args->flags & I915_EXEC_RING_MASK));
return -EINVAL;
diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
index 4cc64d9cca12..54834ad1bf5e 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/i915_gem_request.c
@@ -90,7 +90,7 @@ static void i915_fence_timeline_value_str(struct fence *fence, char *str,
int size)
{
snprintf(str, size, "%u",
- intel_ring_get_seqno(to_i915_request(fence)->engine));
+ intel_engine_get_seqno(to_i915_request(fence)->engine));
}

static void i915_fence_release(struct fence *fence)
@@ -136,7 +136,7 @@ i915_gem_init_seqno(struct drm_i915_private *dev_priv, u32 seqno)

/* Carefully retire all requests without writing to the rings */
for_each_ring(ring, dev_priv, i) {
- ret = intel_ring_idle(ring);
+ ret = intel_engine_idle(ring);
if (ret)
return ret;
}
@@ -144,7 +144,7 @@ i915_gem_init_seqno(struct drm_i915_private *dev_priv, u32 seqno)

/* Finally reset hw state */
for_each_ring(ring, dev_priv, i) {
- intel_ring_init_seqno(ring, seqno);
+ intel_engine_init_seqno(ring, seqno);

for (j = 0; j < ARRAY_SIZE(ring->semaphore.sync_seqno); j++)
ring->semaphore.sync_seqno[j] = 0;
@@ -429,7 +429,7 @@ void __i915_add_request(struct drm_i915_gem_request *request,
if (i915.enable_execlists)
ret = logical_ring_flush_all_caches(request);
else
- ret = intel_ring_flush_all_caches(request);
+ ret = intel_engine_flush_all_caches(request);
/* Not allowed to fail! */
WARN(ret, "*_ring_flush_all_caches failed: %d!\n", ret);
}
diff --git a/drivers/gpu/drm/i915/i915_gem_request.h b/drivers/gpu/drm/i915/i915_gem_request.h
index bd17e3a9a71d..cd4412f6e7e3 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.h
+++ b/drivers/gpu/drm/i915/i915_gem_request.h
@@ -198,13 +198,13 @@ i915_seqno_passed(uint32_t seq1, uint32_t seq2)

static inline bool i915_gem_request_started(struct drm_i915_gem_request *req)
{
- return i915_seqno_passed(intel_ring_get_seqno(req->engine),
+ return i915_seqno_passed(intel_engine_get_seqno(req->engine),
req->previous_seqno);
}

static inline bool i915_gem_request_completed(struct drm_i915_gem_request *req)
{
- return i915_seqno_passed(intel_ring_get_seqno(req->engine),
+ return i915_seqno_passed(intel_engine_get_seqno(req->engine),
req->fence.seqno);
}

diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index b47ca1b7041f..f27d6d1b64d6 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -221,7 +221,7 @@ static void print_error_buffers(struct drm_i915_error_state_buf *m,
}
}

-static const char *hangcheck_action_to_str(enum intel_ring_hangcheck_action a)
+static const char *hangcheck_action_to_str(enum intel_engine_hangcheck_action a)
{
switch (a) {
case HANGCHECK_IDLE:
@@ -841,7 +841,7 @@ static void gen8_record_semaphore_state(struct drm_i915_private *dev_priv,
signal_offset = (GEN8_SIGNAL_OFFSET(ring, i) & (PAGE_SIZE - 1))
/ 4;
tmp = error->semaphore_obj->pages[0];
- idx = intel_ring_sync_index(ring, to);
+ idx = intel_engine_sync_index(ring, to);

ering->semaphore_mboxes[idx] = tmp[signal_offset];
ering->semaphore_seqno[idx] = ring->semaphore.sync_seqno[idx];
@@ -901,8 +901,8 @@ static void i915_record_ring_state(struct drm_device *dev,

ering->waiting = intel_engine_has_waiter(ring);
ering->instpm = I915_READ(RING_INSTPM(ring->mmio_base));
- ering->acthd = intel_ring_get_active_head(ring);
- ering->seqno = intel_ring_get_seqno(ring);
+ ering->acthd = intel_engine_get_active_head(ring);
+ ering->seqno = intel_engine_get_seqno(ring);
ering->start = I915_READ_START(ring);
ering->head = I915_READ_HEAD(ring);
ering->tail = I915_READ_TAIL(ring);
diff --git a/drivers/gpu/drm/i915/i915_guc_submission.c b/drivers/gpu/drm/i915/i915_guc_submission.c
index b47e630e048a..39ccfa8934e3 100644
--- a/drivers/gpu/drm/i915/i915_guc_submission.c
+++ b/drivers/gpu/drm/i915/i915_guc_submission.c
@@ -510,7 +510,7 @@ int i915_guc_wq_check_space(struct i915_guc_client *gc)
static int guc_add_workqueue_item(struct i915_guc_client *gc,
struct drm_i915_gem_request *rq)
{
- enum intel_ring_id ring_id = rq->engine->id;
+ enum intel_engine_id ring_id = rq->engine->id;
struct guc_wq_item *wqi;
void *base;
u32 tail, wq_len, wq_off, space;
@@ -565,7 +565,7 @@ static int guc_add_workqueue_item(struct i915_guc_client *gc,
/* Update the ringbuffer pointer in a saved context image */
static void lr_context_update(struct drm_i915_gem_request *rq)
{
- enum intel_ring_id ring_id = rq->engine->id;
+ enum intel_engine_id ring_id = rq->engine->id;
struct drm_i915_gem_object *ctx_obj = rq->ctx->engine[ring_id].state;
struct drm_i915_gem_object *rb_obj = rq->ring->obj;
struct page *page;
@@ -594,7 +594,7 @@ int i915_guc_submit(struct i915_guc_client *client,
struct drm_i915_gem_request *rq)
{
struct intel_guc *guc = client->guc;
- enum intel_ring_id ring_id = rq->engine->id;
+ enum intel_engine_id ring_id = rq->engine->id;
int q_ret, b_ret;

/* Need this because of the deferred pin ctx and ring */
diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index ce52d7d9ad91..ce047ac84f5f 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -2896,7 +2896,7 @@ static int semaphore_passed(struct intel_engine_cs *ring)
if (signaller->hangcheck.deadlock >= I915_NUM_RINGS)
return -1;

- if (i915_seqno_passed(intel_ring_get_seqno(signaller), seqno))
+ if (i915_seqno_passed(intel_engine_get_seqno(signaller), seqno))
return 1;

/* cursory check for an unkickable deadlock */
@@ -2945,7 +2945,7 @@ static bool subunits_stuck(struct intel_engine_cs *ring)
return stuck;
}

-static enum intel_ring_hangcheck_action
+static enum intel_engine_hangcheck_action
head_stuck(struct intel_engine_cs *ring, u64 acthd)
{
if (acthd != ring->hangcheck.acthd) {
@@ -2968,12 +2968,12 @@ head_stuck(struct intel_engine_cs *ring, u64 acthd)
return HANGCHECK_HUNG;
}

-static enum intel_ring_hangcheck_action
-ring_stuck(struct intel_engine_cs *ring, u64 acthd)
+static enum intel_engine_hangcheck_action
+engine_stuck(struct intel_engine_cs *ring, u64 acthd)
{
struct drm_device *dev = ring->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
- enum intel_ring_hangcheck_action ha;
+ enum intel_engine_hangcheck_action ha;
u32 tmp;

ha = head_stuck(ring, acthd);
@@ -3053,8 +3053,8 @@ static void i915_hangcheck_elapsed(struct work_struct *work)

semaphore_clear_deadlocks(dev_priv);

- acthd = intel_ring_get_active_head(ring);
- seqno = intel_ring_get_seqno(ring);
+ acthd = intel_engine_get_active_head(ring);
+ seqno = intel_engine_get_seqno(ring);
user_interrupts = READ_ONCE(ring->user_interrupts);

if (ring->hangcheck.seqno == seqno) {
@@ -3091,8 +3091,8 @@ static void i915_hangcheck_elapsed(struct work_struct *work)
* being repeatedly kicked and so responsible
* for stalling the machine.
*/
- ring->hangcheck.action = ring_stuck(ring,
- acthd);
+ ring->hangcheck.action =
+ engine_stuck(ring, acthd);

switch (ring->hangcheck.action) {
case HANGCHECK_IDLE:
diff --git a/drivers/gpu/drm/i915/i915_trace.h b/drivers/gpu/drm/i915/i915_trace.h
index 0204ff72b3e4..95cab4776401 100644
--- a/drivers/gpu/drm/i915/i915_trace.h
+++ b/drivers/gpu/drm/i915/i915_trace.h
@@ -569,7 +569,7 @@ TRACE_EVENT(i915_gem_request_notify,
TP_fast_assign(
__entry->dev = ring->dev->primary->index;
__entry->ring = ring->id;
- __entry->seqno = intel_ring_get_seqno(ring);
+ __entry->seqno = intel_engine_get_seqno(ring);
),

TP_printk("dev=%u, ring=%u, seqno=%u",
diff --git a/drivers/gpu/drm/i915/intel_breadcrumbs.c b/drivers/gpu/drm/i915/intel_breadcrumbs.c
index 5ba8b4cd8a18..b9366e6ca5ad 100644
--- a/drivers/gpu/drm/i915/intel_breadcrumbs.c
+++ b/drivers/gpu/drm/i915/intel_breadcrumbs.c
@@ -141,7 +141,7 @@ bool intel_engine_add_wait(struct intel_engine_cs *engine,
struct intel_wait *wait)
{
struct intel_breadcrumbs *b = &engine->breadcrumbs;
- u32 seqno = intel_ring_get_seqno(engine);
+ u32 seqno = intel_engine_get_seqno(engine);
struct rb_node **p, *parent, *completed;
bool first;

@@ -283,7 +283,7 @@ void intel_engine_remove_wait(struct intel_engine_cs *engine,
* the first_waiter. This is undesirable if that
* waiter is a high priority task.
*/
- u32 seqno = intel_ring_get_seqno(engine);
+ u32 seqno = intel_engine_get_seqno(engine);
while (i915_seqno_passed(seqno,
to_wait(next)->seqno)) {
struct rb_node *n = rb_next(next);
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 65beb7267d1a..92ae7bc532ed 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -637,7 +637,7 @@ static int logical_ring_invalidate_all_caches(struct drm_i915_gem_request *req)
static int execlists_move_to_gpu(struct drm_i915_gem_request *req,
struct list_head *vmas)
{
- const unsigned other_rings = ~intel_ring_flag(req->engine);
+ const unsigned other_rings = ~intel_engine_flag(req->engine);
struct i915_vma *vma;
uint32_t flush_domains = 0;
bool flush_chipset = false;
@@ -843,10 +843,10 @@ void intel_logical_ring_stop(struct intel_engine_cs *ring)
struct drm_i915_private *dev_priv = ring->dev->dev_private;
int ret;

- if (!intel_ring_initialized(ring))
+ if (!intel_engine_initialized(ring))
return;

- ret = intel_ring_idle(ring);
+ ret = intel_engine_idle(ring);
if (ret)
DRM_ERROR("failed to quiesce %s whilst cleaning up: %d\n",
ring->name, ret);
@@ -1455,7 +1455,7 @@ static int gen8_emit_bb_start(struct drm_i915_gem_request *req,
* not idle). PML4 is allocated during ppgtt init so this is
* not needed in 48-bit.*/
if (req->ctx->ppgtt &&
- (intel_ring_flag(req->engine) & req->ctx->ppgtt->pd_dirty_rings)) {
+ (intel_engine_flag(req->engine) & req->ctx->ppgtt->pd_dirty_rings)) {
if (!USES_FULL_48BIT_PPGTT(req->i915) &&
!intel_vgpu_active(req->i915->dev)) {
ret = intel_logical_ring_emit_pdps(req);
@@ -1463,7 +1463,7 @@ static int gen8_emit_bb_start(struct drm_i915_gem_request *req,
return ret;
}

- req->ctx->ppgtt->pd_dirty_rings &= ~intel_ring_flag(req->engine);
+ req->ctx->ppgtt->pd_dirty_rings &= ~intel_engine_flag(req->engine);
}

ret = intel_ring_begin(req, 4);
@@ -1714,14 +1714,11 @@ static int gen8_init_rcs_context(struct drm_i915_gem_request *req)
*/
void intel_logical_ring_cleanup(struct intel_engine_cs *ring)
{
- struct drm_i915_private *dev_priv;
-
- if (!intel_ring_initialized(ring))
+ if (!intel_engine_initialized(ring))
return;

- dev_priv = ring->dev->dev_private;
-
if (ring->buffer) {
+ struct drm_i915_private *dev_priv = ring->i915;
intel_logical_ring_stop(ring);
WARN_ON((I915_READ_MODE(ring) & MODE_IDLE) == 0);
}
diff --git a/drivers/gpu/drm/i915/intel_mocs.c b/drivers/gpu/drm/i915/intel_mocs.c
index 039c7405f640..61e1704d7313 100644
--- a/drivers/gpu/drm/i915/intel_mocs.c
+++ b/drivers/gpu/drm/i915/intel_mocs.c
@@ -159,7 +159,7 @@ static bool get_mocs_settings(struct drm_i915_private *dev_priv,
return result;
}

-static i915_reg_t mocs_register(enum intel_ring_id ring, int index)
+static i915_reg_t mocs_register(enum intel_engine_id ring, int index)
{
switch (ring) {
case RCS:
@@ -191,7 +191,7 @@ static i915_reg_t mocs_register(enum intel_ring_id ring, int index)
*/
static int emit_mocs_control_table(struct drm_i915_gem_request *req,
const struct drm_i915_mocs_table *table,
- enum intel_ring_id id)
+ enum intel_engine_id id)
{
struct intel_ringbuffer *ring = req->ring;
unsigned int index;
@@ -318,7 +318,7 @@ int intel_rcs_context_init_mocs(struct drm_i915_gem_request *req)

if (get_mocs_settings(req->i915, &t)) {
struct intel_engine_cs *ring;
- enum intel_ring_id ring_id;
+ enum intel_engine_id ring_id;

/* Program the control registers */
for_each_ring(ring, req->i915, ring_id) {
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index c437b61ac1d0..1bb9f376aa0b 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -425,16 +425,16 @@ static void ring_write_tail(struct intel_engine_cs *ring,
I915_WRITE_TAIL(ring, value);
}

-u64 intel_ring_get_active_head(struct intel_engine_cs *ring)
+u64 intel_engine_get_active_head(struct intel_engine_cs *engine)
{
- struct drm_i915_private *dev_priv = ring->dev->dev_private;
+ struct drm_i915_private *dev_priv = engine->i915;
u64 acthd;

- if (INTEL_INFO(ring->dev)->gen >= 8)
- acthd = I915_READ64_2x32(RING_ACTHD(ring->mmio_base),
- RING_ACTHD_UDW(ring->mmio_base));
- else if (INTEL_INFO(ring->dev)->gen >= 4)
- acthd = I915_READ(RING_ACTHD(ring->mmio_base));
+ if (INTEL_INFO(dev_priv)->gen >= 8)
+ acthd = I915_READ64_2x32(RING_ACTHD(engine->mmio_base),
+ RING_ACTHD_UDW(engine->mmio_base));
+ else if (INTEL_INFO(dev_priv)->gen >= 4)
+ acthd = I915_READ(RING_ACTHD(engine->mmio_base));
else
acthd = I915_READ(ACTHD);

@@ -697,7 +697,7 @@ static int intel_ring_workarounds_emit(struct drm_i915_gem_request *req)
return 0;

req->engine->gpu_caches_dirty = true;
- ret = intel_ring_flush_all_caches(req);
+ ret = intel_engine_flush_all_caches(req);
if (ret)
return ret;

@@ -715,7 +715,7 @@ static int intel_ring_workarounds_emit(struct drm_i915_gem_request *req)
intel_ring_advance(ring);

req->engine->gpu_caches_dirty = true;
- ret = intel_ring_flush_all_caches(req);
+ ret = intel_engine_flush_all_caches(req);
if (ret)
return ret;

@@ -2028,21 +2028,19 @@ static int intel_init_engine(struct drm_device *dev,
return 0;

error:
- intel_cleanup_ring_buffer(engine);
+ intel_engine_cleanup(engine);
return ret;
}

-void intel_cleanup_ring_buffer(struct intel_engine_cs *ring)
+void intel_engine_cleanup(struct intel_engine_cs *ring)
{
- struct drm_i915_private *dev_priv;
-
- if (!intel_ring_initialized(ring))
+ if (!intel_engine_initialized(ring))
return;

- dev_priv = to_i915(ring->dev);
-
if (ring->buffer) {
- intel_stop_ring_buffer(ring);
+ struct drm_i915_private *dev_priv = ring->i915;
+
+ intel_engine_stop(ring);
WARN_ON(!IS_GEN2(ring->dev) && (I915_READ_MODE(ring) & MODE_IDLE) == 0);

intel_unpin_ringbuffer_obj(ring->buffer);
@@ -2062,7 +2060,7 @@ void intel_cleanup_ring_buffer(struct intel_engine_cs *ring)
ring->dev = NULL;
}

-int intel_ring_idle(struct intel_engine_cs *ring)
+int intel_engine_idle(struct intel_engine_cs *ring)
{
struct drm_i915_gem_request *req;

@@ -2265,7 +2263,7 @@ int intel_ring_cacheline_align(struct drm_i915_gem_request *req)
return 0;
}

-void intel_ring_init_seqno(struct intel_engine_cs *ring, u32 seqno)
+void intel_engine_init_seqno(struct intel_engine_cs *ring, u32 seqno)
{
struct drm_device *dev = ring->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
@@ -2834,7 +2832,7 @@ int intel_init_vebox_ring_buffer(struct drm_device *dev)
}

int
-intel_ring_flush_all_caches(struct drm_i915_gem_request *req)
+intel_engine_flush_all_caches(struct drm_i915_gem_request *req)
{
struct intel_engine_cs *engine = req->engine;
int ret;
@@ -2853,7 +2851,7 @@ intel_ring_flush_all_caches(struct drm_i915_gem_request *req)
}

int
-intel_ring_invalidate_all_caches(struct drm_i915_gem_request *req)
+intel_engine_invalidate_all_caches(struct drm_i915_gem_request *req)
{
struct intel_engine_cs *engine = req->engine;
uint32_t flush_domains;
@@ -2874,14 +2872,14 @@ intel_ring_invalidate_all_caches(struct drm_i915_gem_request *req)
}

void
-intel_stop_ring_buffer(struct intel_engine_cs *ring)
+intel_engine_stop(struct intel_engine_cs *ring)
{
int ret;

- if (!intel_ring_initialized(ring))
+ if (!intel_engine_initialized(ring))
return;

- ret = intel_ring_idle(ring);
+ ret = intel_engine_idle(ring);
if (ret)
DRM_ERROR("failed to quiesce %s whilst cleaning up: %d\n",
ring->name, ret);
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 6bd9b356c95d..6803e4820688 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -75,7 +75,7 @@ struct intel_hw_status_page {
ring->semaphore.signal_ggtt[ring->id] = MI_SEMAPHORE_SYNC_INVALID; \
} while(0)

-enum intel_ring_hangcheck_action {
+enum intel_engine_hangcheck_action {
HANGCHECK_IDLE = 0,
HANGCHECK_WAIT,
HANGCHECK_ACTIVE,
@@ -86,13 +86,13 @@ enum intel_ring_hangcheck_action {

#define HANGCHECK_SCORE_RING_HUNG 31

-struct intel_ring_hangcheck {
+struct intel_engine_hangcheck {
u64 acthd;
u64 max_acthd;
u32 seqno;
unsigned user_interrupts;
int score;
- enum intel_ring_hangcheck_action action;
+ enum intel_engine_hangcheck_action action;
int deadlock;
u32 instdone[I915_NUM_INSTDONE_REG];
};
@@ -148,9 +148,9 @@ struct i915_ctx_workarounds {

struct drm_i915_gem_request;

-struct intel_engine_cs {
+struct intel_engine_cs {
const char *name;
- enum intel_ring_id {
+ enum intel_engine_id {
RCS = 0x0,
VCS,
BCS,
@@ -337,7 +337,7 @@ struct intel_engine_cs {
struct intel_context *default_context;
struct intel_context *last_context;

- struct intel_ring_hangcheck hangcheck;
+ struct intel_engine_hangcheck hangcheck;

struct {
struct drm_i915_gem_object *obj;
@@ -380,20 +380,20 @@ struct intel_engine_cs {
};

static inline bool
-intel_ring_initialized(struct intel_engine_cs *ring)
+intel_engine_initialized(struct intel_engine_cs *ring)
{
return ring->dev != NULL;
}

static inline unsigned
-intel_ring_flag(struct intel_engine_cs *ring)
+intel_engine_flag(struct intel_engine_cs *ring)
{
return 1 << ring->id;
}

static inline u32
-intel_ring_sync_index(struct intel_engine_cs *ring,
- struct intel_engine_cs *other)
+intel_engine_sync_index(struct intel_engine_cs *ring,
+ struct intel_engine_cs *other)
{
int idx;

@@ -461,8 +461,8 @@ int intel_pin_and_map_ringbuffer_obj(struct drm_device *dev,
void intel_unpin_ringbuffer_obj(struct intel_ringbuffer *ringbuf);
void intel_ringbuffer_free(struct intel_ringbuffer *ring);

-void intel_stop_ring_buffer(struct intel_engine_cs *ring);
-void intel_cleanup_ring_buffer(struct intel_engine_cs *ring);
+void intel_engine_stop(struct intel_engine_cs *ring);
+void intel_engine_cleanup(struct intel_engine_cs *ring);

int intel_ring_alloc_request_extras(struct drm_i915_gem_request *request);

@@ -487,10 +487,10 @@ int __intel_ring_space(int head, int tail, int size);
void intel_ring_update_space(struct intel_ringbuffer *ringbuf);
int intel_ring_space(struct intel_ringbuffer *ringbuf);

-int __must_check intel_ring_idle(struct intel_engine_cs *ring);
-void intel_ring_init_seqno(struct intel_engine_cs *ring, u32 seqno);
-int intel_ring_flush_all_caches(struct drm_i915_gem_request *req);
-int intel_ring_invalidate_all_caches(struct drm_i915_gem_request *req);
+int __must_check intel_engine_idle(struct intel_engine_cs *ring);
+void intel_engine_init_seqno(struct intel_engine_cs *ring, u32 seqno);
+int intel_engine_flush_all_caches(struct drm_i915_gem_request *req);
+int intel_engine_invalidate_all_caches(struct drm_i915_gem_request *req);

void intel_fini_pipe_control(struct intel_engine_cs *ring);
int intel_init_pipe_control(struct intel_engine_cs *ring);
@@ -501,8 +501,8 @@ int intel_init_bsd2_ring_buffer(struct drm_device *dev);
int intel_init_blt_ring_buffer(struct drm_device *dev);
int intel_init_vebox_ring_buffer(struct drm_device *dev);

-u64 intel_ring_get_active_head(struct intel_engine_cs *ring);
-static inline u32 intel_ring_get_seqno(struct intel_engine_cs *ring)
+u64 intel_engine_get_active_head(struct intel_engine_cs *ring);
+static inline u32 intel_engine_get_seqno(struct intel_engine_cs *ring)
{
return intel_read_status_page(ring, I915_GEM_HWS_INDEX);
}
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:16:36 UTC
In legacy mode, we use the gen6 seqno barrier to insert a delay after
the interrupt before reading the seqno (as the seqno write is not
flushed before the interrupt is sent, the interrupt arrives before the
seqno is visible). Execlists, however, ignored the evidence from igt
that it requires the same barrier.

Note that it is harder, but not impossible, to reproduce the missed
interrupt syndrome with execlists. This is primarily because execlists
itself being interrupt driven helps mask the issue.
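
For reference, the wait-side pattern reduces to the following sketch
(the wrapper function is illustrative only; the helpers it calls are
the ones used in the diff below):

static u32 sample_seqno_after_irq(struct intel_engine_cs *ring)
{
	struct drm_i915_private *dev_priv = ring->i915;

	/* A posting read from a CS register stalls us for roughly one
	 * memory transaction, giving the GPU's seqno write time to land.
	 */
	POSTING_READ_FW(RING_ACTHD(ring->mmio_base));

	/* Invalidate the cacheline backing the status page ... */
	intel_flush_status_page(ring, I915_GEM_HWS_INDEX);

	/* ... then read back a now-coherent seqno. */
	return intel_read_status_page(ring, I915_GEM_HWS_INDEX);
}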

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/intel_lrc.c | 39 +++++++++++++++++++++------------------
1 file changed, 21 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index ad51b1fc37cd..27d91f1ceb2b 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1775,18 +1775,24 @@ static int gen8_emit_flush_render(struct drm_i915_gem_request *request,
return 0;
}

-static void bxt_seqno_barrier(struct intel_engine_cs *ring)
+static void
+gen6_seqno_barrier(struct intel_engine_cs *ring)
{
- /*
- * On BXT A steppings there is a HW coherency issue whereby the
- * MI_STORE_DATA_IMM storing the completed request's seqno
- * occasionally doesn't invalidate the CPU cache. Work around this by
- * clflushing the corresponding cacheline whenever the caller wants
- * the coherency to be guaranteed. Note that this cacheline is known
- * to be clean at this point, since we only write it in
- * bxt_a_set_seqno(), where we also do a clflush after the write. So
- * this clflush in practice becomes an invalidate operation.
+ /* Workaround to force correct ordering between irq and seqno writes on
+ * ivb (and maybe also on snb) by reading from a CS register (like
+ * ACTHD) before reading the status page.
+ *
+	 * Note that this effectively stalls the read by the time
+ * it takes to do a memory transaction, which more or less ensures
+ * that the write from the GPU has sufficient time to invalidate
+ * the CPU cacheline. Alternatively we could delay the interrupt from
+ * the CS ring to give the write time to land, but that would incur
+ * a delay after every batch i.e. much more frequent than a delay
+ * when waiting for the interrupt (with the same net latency).
*/
+ struct drm_i915_private *dev_priv = ring->i915;
+ POSTING_READ_FW(RING_ACTHD(ring->mmio_base));
+
intel_flush_status_page(ring, I915_GEM_HWS_INDEX);
}

@@ -1984,8 +1990,7 @@ static int logical_render_ring_init(struct drm_device *dev)
ring->init_hw = gen8_init_render_ring;
ring->init_context = gen8_init_rcs_context;
ring->cleanup = intel_fini_pipe_control;
- if (IS_BXT_REVID(dev, 0, BXT_REVID_A1))
- ring->irq_seqno_barrier = bxt_seqno_barrier;
+ ring->irq_seqno_barrier = gen6_seqno_barrier;
ring->emit_request = gen8_emit_request;
ring->emit_flush = gen8_emit_flush_render;
ring->irq_get = gen8_logical_ring_get_irq;
@@ -2031,8 +2036,7 @@ static int logical_bsd_ring_init(struct drm_device *dev)
GT_CONTEXT_SWITCH_INTERRUPT << GEN8_VCS1_IRQ_SHIFT;

ring->init_hw = gen8_init_common_ring;
- if (IS_BXT_REVID(dev, 0, BXT_REVID_A1))
- ring->irq_seqno_barrier = bxt_seqno_barrier;
+ ring->irq_seqno_barrier = gen6_seqno_barrier;
ring->emit_request = gen8_emit_request;
ring->emit_flush = gen8_emit_flush;
ring->irq_get = gen8_logical_ring_get_irq;
@@ -2056,6 +2060,7 @@ static int logical_bsd2_ring_init(struct drm_device *dev)
GT_CONTEXT_SWITCH_INTERRUPT << GEN8_VCS2_IRQ_SHIFT;

ring->init_hw = gen8_init_common_ring;
+ ring->irq_seqno_barrier = gen6_seqno_barrier;
ring->emit_request = gen8_emit_request;
ring->emit_flush = gen8_emit_flush;
ring->irq_get = gen8_logical_ring_get_irq;
@@ -2079,8 +2084,7 @@ static int logical_blt_ring_init(struct drm_device *dev)
GT_CONTEXT_SWITCH_INTERRUPT << GEN8_BCS_IRQ_SHIFT;

ring->init_hw = gen8_init_common_ring;
- if (IS_BXT_REVID(dev, 0, BXT_REVID_A1))
- ring->irq_seqno_barrier = bxt_seqno_barrier;
+ ring->irq_seqno_barrier = gen6_seqno_barrier;
ring->emit_request = gen8_emit_request;
ring->emit_flush = gen8_emit_flush;
ring->irq_get = gen8_logical_ring_get_irq;
@@ -2104,8 +2108,7 @@ static int logical_vebox_ring_init(struct drm_device *dev)
GT_CONTEXT_SWITCH_INTERRUPT << GEN8_VECS_IRQ_SHIFT;

ring->init_hw = gen8_init_common_ring;
- if (IS_BXT_REVID(dev, 0, BXT_REVID_A1))
- ring->irq_seqno_barrier = bxt_seqno_barrier;
+ ring->irq_seqno_barrier = gen6_seqno_barrier;
ring->emit_request = gen8_emit_request;
ring->emit_flush = gen8_emit_flush;
ring->irq_get = gen8_logical_ring_get_irq;
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:17:20 UTC
Now that we use the same vfuncs for emitting the batch buffer in both
execlists and legacy mode, the golden render state initialisation is
identical between the two.

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_gem_render_state.c | 22 ++++++++++++------
drivers/gpu/drm/i915/i915_gem_render_state.h | 18 ---------------
drivers/gpu/drm/i915/intel_lrc.c | 34 +---------------------------
drivers/gpu/drm/i915/intel_renderstate.h | 16 +++++++++----
4 files changed, 27 insertions(+), 63 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_render_state.c b/drivers/gpu/drm/i915/i915_gem_render_state.c
index ccc988c2b226..222f25777bb4 100644
--- a/drivers/gpu/drm/i915/i915_gem_render_state.c
+++ b/drivers/gpu/drm/i915/i915_gem_render_state.c
@@ -28,6 +28,15 @@
#include "i915_drv.h"
#include "intel_renderstate.h"

+struct render_state {
+ const struct intel_renderstate_rodata *rodata;
+ struct drm_i915_gem_object *obj;
+ u64 ggtt_offset;
+ int gen;
+ u32 aux_batch_size;
+ u32 aux_batch_offset;
+};
+
static const struct intel_renderstate_rodata *
render_state_get_rodata(struct drm_device *dev, const int gen)
{
@@ -163,14 +172,14 @@ err_out:

#undef OUT_BATCH

-void i915_gem_render_state_fini(struct render_state *so)
+static void render_state_fini(struct render_state *so)
{
i915_gem_object_ggtt_unpin(so->obj);
drm_gem_object_unreference(&so->obj->base);
}

-int i915_gem_render_state_prepare(struct intel_engine_cs *ring,
- struct render_state *so)
+static int render_state_prepare(struct intel_engine_cs *ring,
+ struct render_state *so)
{
int ret;

@@ -186,7 +195,7 @@ int i915_gem_render_state_prepare(struct intel_engine_cs *ring,

ret = render_state_setup(so);
if (ret) {
- i915_gem_render_state_fini(so);
+ render_state_fini(so);
return ret;
}

@@ -198,7 +207,7 @@ int i915_gem_render_state_init(struct drm_i915_gem_request *req)
struct render_state so;
int ret;

- ret = i915_gem_render_state_prepare(req->engine, &so);
+ ret = render_state_prepare(req->engine, &so);
if (ret)
return ret;

@@ -222,8 +231,7 @@ int i915_gem_render_state_init(struct drm_i915_gem_request *req)
}

i915_vma_move_to_active(i915_gem_obj_to_ggtt(so.obj), req);
-
out:
- i915_gem_render_state_fini(&so);
+ render_state_fini(&so);
return ret;
}
diff --git a/drivers/gpu/drm/i915/i915_gem_render_state.h b/drivers/gpu/drm/i915/i915_gem_render_state.h
index e641bb093a90..c44fca8599bb 100644
--- a/drivers/gpu/drm/i915/i915_gem_render_state.h
+++ b/drivers/gpu/drm/i915/i915_gem_render_state.h
@@ -26,24 +26,6 @@

#include <linux/types.h>

-struct intel_renderstate_rodata {
- const u32 *reloc;
- const u32 *batch;
- const u32 batch_items;
-};
-
-struct render_state {
- const struct intel_renderstate_rodata *rodata;
- struct drm_i915_gem_object *obj;
- u64 ggtt_offset;
- int gen;
- u32 aux_batch_size;
- u32 aux_batch_offset;
-};
-
int i915_gem_render_state_init(struct drm_i915_gem_request *req);
-void i915_gem_render_state_fini(struct render_state *so);
-int i915_gem_render_state_prepare(struct intel_engine_cs *ring,
- struct render_state *so);

#endif /* _I915_GEM_RENDER_STATE_H_ */
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 9838503fafca..2f92c43397eb 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -1627,38 +1627,6 @@ static int gen8_add_request(struct drm_i915_gem_request *request)
return 0;
}

-static int intel_lr_context_render_state_init(struct drm_i915_gem_request *req)
-{
- struct render_state so;
- int ret;
-
- ret = i915_gem_render_state_prepare(req->engine, &so);
- if (ret)
- return ret;
-
- if (so.rodata == NULL)
- return 0;
-
- ret = req->engine->emit_bb_start(req, so.ggtt_offset,
- so.rodata->batch_items * 4,
- I915_DISPATCH_SECURE);
- if (ret)
- goto out;
-
- ret = req->engine->emit_bb_start(req,
- (so.ggtt_offset + so.aux_batch_offset),
- so.aux_batch_size,
- I915_DISPATCH_SECURE);
- if (ret)
- goto out;
-
- i915_vma_move_to_active(i915_gem_obj_to_ggtt(so.obj), req);
-
-out:
- i915_gem_render_state_fini(&so);
- return ret;
-}
-
static int gen8_init_rcs_context(struct drm_i915_gem_request *req)
{
int ret;
@@ -1675,7 +1643,7 @@ static int gen8_init_rcs_context(struct drm_i915_gem_request *req)
if (ret)
DRM_ERROR("MOCS failed to program: expect performance issues.\n");

- return intel_lr_context_render_state_init(req);
+ return i915_gem_render_state_init(req);
}

/**
diff --git a/drivers/gpu/drm/i915/intel_renderstate.h b/drivers/gpu/drm/i915/intel_renderstate.h
index 5bd69852752c..08f6fea05a2c 100644
--- a/drivers/gpu/drm/i915/intel_renderstate.h
+++ b/drivers/gpu/drm/i915/intel_renderstate.h
@@ -24,12 +24,13 @@
#ifndef _INTEL_RENDERSTATE_H
#define _INTEL_RENDERSTATE_H

-#include "i915_drv.h"
+#include <linux/types.h>

-extern const struct intel_renderstate_rodata gen6_null_state;
-extern const struct intel_renderstate_rodata gen7_null_state;
-extern const struct intel_renderstate_rodata gen8_null_state;
-extern const struct intel_renderstate_rodata gen9_null_state;
+struct intel_renderstate_rodata {
+ const u32 *reloc;
+ const u32 *batch;
+ const u32 batch_items;
+};

#define RO_RENDERSTATE(_g) \
const struct intel_renderstate_rodata gen ## _g ## _null_state = { \
@@ -38,4 +39,9 @@ extern const struct intel_renderstate_rodata gen9_null_state;
.batch_items = sizeof(gen ## _g ## _null_state_batch)/4, \
}

+extern const struct intel_renderstate_rodata gen6_null_state;
+extern const struct intel_renderstate_rodata gen7_null_state;
+extern const struct intel_renderstate_rodata gen8_null_state;
+extern const struct intel_renderstate_rodata gen9_null_state;
+
#endif /* INTEL_RENDERSTATE_H */
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:17:30 UTC
The multiple levels of indirection do nothing but hinder the compiler,
and the pointer chasing turns out to be quite painful to follow but
painless to fix.
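
The shape of the fix, reduced to a toy (types and names here are
illustrative, not the driver's): compute the boolean once at creation
and cache it in the structure, so the hot paths test a bit instead of
chasing a chain of pointers.

#include <linux/types.h>

struct toy_address_space {
	bool is_ggtt;			/* set once at creation */
};

struct toy_vma {
	struct toy_address_space *vm;
	bool is_ggtt : 1;		/* cached copy of vm->is_ggtt */
};

static void toy_vma_init(struct toy_vma *vma, struct toy_address_space *vm)
{
	vma->vm = vm;
	vma->is_ggtt = vm->is_ggtt;	/* the only dereference */
}

static bool toy_vma_is_ggtt(const struct toy_vma *vma)
{
	return vma->is_ggtt;		/* no pointer chasing */
}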

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_debugfs.c | 13 ++++++-------
drivers/gpu/drm/i915/i915_drv.h | 7 -------
drivers/gpu/drm/i915/i915_gem.c | 18 +++++++-----------
drivers/gpu/drm/i915/i915_gem_execbuffer.c | 5 ++---
drivers/gpu/drm/i915/i915_gem_gtt.c | 12 +++++-------
drivers/gpu/drm/i915/i915_gem_gtt.h | 5 +++++
drivers/gpu/drm/i915/i915_trace.h | 27 ++++++++-------------------
7 files changed, 33 insertions(+), 54 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index dd1788c81b90..99a6181b012e 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -118,7 +118,7 @@ static u64 i915_gem_obj_total_ggtt_size(struct drm_i915_gem_object *obj)
struct i915_vma *vma;

list_for_each_entry(vma, &obj->vma_list, obj_link) {
- if (i915_is_ggtt(vma->vm) && drm_mm_node_allocated(&vma->node))
+ if (vma->is_ggtt && drm_mm_node_allocated(&vma->node))
size += vma->node.size;
}

@@ -165,12 +165,11 @@ describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj)
seq_printf(m, " (fence: %d)", obj->fence_reg);
list_for_each_entry(vma, &obj->vma_list, obj_link) {
seq_printf(m, " (%sgtt offset: %08llx, size: %08llx",
- i915_is_ggtt(vma->vm) ? "g" : "pp",
+ vma->is_ggtt ? "g" : "pp",
vma->node.start, vma->node.size);
- if (i915_is_ggtt(vma->vm))
- seq_printf(m, ", type: %u)", vma->ggtt_view.type);
- else
- seq_puts(m, ")");
+ if (vma->is_ggtt)
+ seq_printf(m, ", type: %u", vma->ggtt_view.type);
+ seq_puts(m, ")");
}
if (obj->stolen)
seq_printf(m, " (stolen: %08llx)", obj->stolen->start);
@@ -346,7 +345,7 @@ static int per_file_stats(int id, void *ptr, void *data)

bound++;

- if (i915_is_ggtt(vma->vm)) {
+ if (vma->is_ggtt) {
stats->global += vma->node.size;
} else {
struct i915_hw_ppgtt *ppgtt
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index c9c1a5cdc1e5..f840cc55f1ab 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2905,18 +2905,11 @@ bool i915_gem_obj_is_pinned(struct drm_i915_gem_object *obj);
/* Some GGTT VM helpers */
#define i915_obj_to_ggtt(obj) \
(&((struct drm_i915_private *)(obj)->base.dev->dev_private)->gtt.base)
-static inline bool i915_is_ggtt(struct i915_address_space *vm)
-{
- struct i915_address_space *ggtt =
- &((struct drm_i915_private *)(vm)->dev->dev_private)->gtt.base;
- return vm == ggtt;
-}

static inline struct i915_hw_ppgtt *
i915_vm_to_ppgtt(struct i915_address_space *vm)
{
WARN_ON(i915_is_ggtt(vm));
-
return container_of(vm, struct i915_hw_ppgtt, base);
}

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 44bd514a6c2e..9a22fdd8a9f5 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2595,8 +2595,7 @@ static int __i915_vma_unbind(struct i915_vma *vma, bool wait)
return ret;
}

- if (i915_is_ggtt(vma->vm) &&
- vma->ggtt_view.type == I915_GGTT_VIEW_NORMAL) {
+ if (vma->is_ggtt && vma->ggtt_view.type == I915_GGTT_VIEW_NORMAL) {
i915_gem_object_finish_gtt(obj);

/* release the fence reg _after_ flushing */
@@ -2611,7 +2610,7 @@ static int __i915_vma_unbind(struct i915_vma *vma, bool wait)
vma->bound = 0;

list_del_init(&vma->vm_link);
- if (i915_is_ggtt(vma->vm)) {
+ if (vma->is_ggtt) {
if (vma->ggtt_view.type == I915_GGTT_VIEW_NORMAL) {
obj->map_and_fenceable = false;
} else if (vma->ggtt_view.pages) {
@@ -3880,17 +3879,14 @@ struct i915_vma *i915_gem_obj_to_ggtt_view(struct drm_i915_gem_object *obj,

void i915_gem_vma_destroy(struct i915_vma *vma)
{
- struct i915_address_space *vm = NULL;
WARN_ON(vma->node.allocated);

/* Keep the vma as a placeholder in the execbuffer reservation lists */
if (!list_empty(&vma->exec_list))
return;

- vm = vma->vm;
-
- if (!i915_is_ggtt(vm))
- i915_ppgtt_put(i915_vm_to_ppgtt(vm));
+ if (!vma->is_ggtt)
+ i915_ppgtt_put(i915_vm_to_ppgtt(vma->vm));

list_del(&vma->obj_link);

@@ -4446,7 +4442,7 @@ u64 i915_gem_obj_offset(struct drm_i915_gem_object *o,
WARN_ON(vm == &dev_priv->mm.aliasing_ppgtt->base);

list_for_each_entry(vma, &o->vma_list, obj_link) {
- if (i915_is_ggtt(vma->vm) &&
+ if (vma->is_ggtt &&
vma->ggtt_view.type != I915_GGTT_VIEW_NORMAL)
continue;
if (vma->vm == vm)
@@ -4479,7 +4475,7 @@ bool i915_gem_obj_bound(struct drm_i915_gem_object *o,
struct i915_vma *vma;

list_for_each_entry(vma, &o->vma_list, obj_link) {
- if (i915_is_ggtt(vma->vm) &&
+ if (vma->is_ggtt &&
vma->ggtt_view.type != I915_GGTT_VIEW_NORMAL)
continue;
if (vma->vm == vm && drm_mm_node_allocated(&vma->node))
@@ -4526,7 +4522,7 @@ unsigned long i915_gem_obj_size(struct drm_i915_gem_object *o,
BUG_ON(list_empty(&o->vma_list));

list_for_each_entry(vma, &o->vma_list, obj_link) {
- if (i915_is_ggtt(vma->vm) &&
+ if (vma->is_ggtt &&
vma->ggtt_view.type != I915_GGTT_VIEW_NORMAL)
continue;
if (vma->vm == vm)
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index 56d6b5dbb121..c10795f58bfc 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -683,7 +683,7 @@ need_reloc_mappable(struct i915_vma *vma)
if (entry->relocation_count == 0)
return false;

- if (!i915_is_ggtt(vma->vm))
+ if (!vma->is_ggtt)
return false;

/* See also use_cpu_reloc() */
@@ -702,8 +702,7 @@ eb_vma_misplaced(struct i915_vma *vma)
struct drm_i915_gem_exec_object2 *entry = vma->exec_entry;
struct drm_i915_gem_object *obj = vma->obj;

- WARN_ON(entry->flags & __EXEC_OBJECT_NEEDS_MAP &&
- !i915_is_ggtt(vma->vm));
+ WARN_ON(entry->flags & __EXEC_OBJECT_NEEDS_MAP && !vma->is_ggtt);

if (entry->alignment &&
vma->node.start & (entry->alignment - 1))
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index b5c3bbe6dc2a..06117bd0fc00 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -3150,6 +3150,7 @@ int i915_gem_gtt_init(struct drm_device *dev)
}

gtt->base.dev = dev;
+ gtt->base.is_ggtt = true;

ret = gtt->gtt_probe(dev, &gtt->base.total, &gtt->stolen_size,
&gtt->mappable_base, &gtt->mappable_end);
@@ -3258,13 +3259,14 @@ __i915_gem_vma_create(struct drm_i915_gem_object *obj,
INIT_LIST_HEAD(&vma->exec_list);
vma->vm = vm;
vma->obj = obj;
+ vma->is_ggtt = i915_is_ggtt(vm);

if (i915_is_ggtt(vm))
vma->ggtt_view = *ggtt_view;
+ else
+ i915_ppgtt_get(i915_vm_to_ppgtt(vm));

list_add_tail(&vma->obj_link, &obj->vma_list);
- if (!i915_is_ggtt(vm))
- i915_ppgtt_get(i915_vm_to_ppgtt(vm));

return vma;
}
@@ -3536,13 +3538,9 @@ int i915_vma_bind(struct i915_vma *vma, enum i915_cache_level cache_level,
return 0;

if (vma->bound == 0 && vma->vm->allocate_va_range) {
- trace_i915_va_alloc(vma->vm,
- vma->node.start,
- vma->node.size,
- VM_TO_TRACE_NAME(vma->vm));
-
/* XXX: i915_vma_pin() will fix this +- hack */
vma->pin_count++;
+ trace_i915_va_alloc(vma);
ret = vma->vm->allocate_va_range(vma->vm,
vma->node.start,
vma->node.size);
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
index cb796c1ff6a5..633b9b2e1acb 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.h
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
@@ -184,6 +184,7 @@ struct i915_vma {
#define GLOBAL_BIND (1<<0)
#define LOCAL_BIND (1<<1)
unsigned int bound : 4;
+ bool is_ggtt : 1;

/**
* Support different GGTT views into the same object.
@@ -276,6 +277,8 @@ struct i915_address_space {
u64 start; /* Start offset always 0 for dri2 */
u64 total; /* size addr space maps (ex. 2GB for ggtt) */

+ bool is_ggtt;
+
struct i915_page_scratch *scratch_page;
struct i915_page_table *scratch_pt;
struct i915_page_directory *scratch_pd;
@@ -331,6 +334,8 @@ struct i915_address_space {
u32 flags);
};

+#define i915_is_ggtt(V) ((V)->is_ggtt)
+
/* The Graphics Translation Table is the way in which GEN hardware translates a
* Graphics Virtual Address into a Physical Address. In addition to the normal
* collateral associated with any va->pa translations GEN hardware also has a
diff --git a/drivers/gpu/drm/i915/i915_trace.h b/drivers/gpu/drm/i915/i915_trace.h
index 85469e3c740a..e486dcef508d 100644
--- a/drivers/gpu/drm/i915/i915_trace.h
+++ b/drivers/gpu/drm/i915/i915_trace.h
@@ -175,35 +175,24 @@ TRACE_EVENT(i915_vma_unbind,
__entry->obj, __entry->offset, __entry->size, __entry->vm)
);

-#define VM_TO_TRACE_NAME(vm) \
- (i915_is_ggtt(vm) ? "G" : \
- "P")
-
-DECLARE_EVENT_CLASS(i915_va,
- TP_PROTO(struct i915_address_space *vm, u64 start, u64 length, const char *name),
- TP_ARGS(vm, start, length, name),
+TRACE_EVENT(i915_va_alloc,
+ TP_PROTO(struct i915_vma *vma),
+ TP_ARGS(vma),

TP_STRUCT__entry(
__field(struct i915_address_space *, vm)
__field(u64, start)
__field(u64, end)
- __string(name, name)
),

TP_fast_assign(
- __entry->vm = vm;
- __entry->start = start;
- __entry->end = start + length - 1;
- __assign_str(name, name);
+ __entry->vm = vma->vm;
+ __entry->start = vma->node.start;
+ __entry->end = vma->node.start + vma->node.size - 1;
),

- TP_printk("vm=%p (%s), 0x%llx-0x%llx",
- __entry->vm, __get_str(name), __entry->start, __entry->end)
-);
-
-DEFINE_EVENT(i915_va, i915_va_alloc,
- TP_PROTO(struct i915_address_space *vm, u64 start, u64 length, const char *name),
- TP_ARGS(vm, start, length, name)
+ TP_printk("vm=%p (%c), 0x%llx-0x%llx",
+ __entry->vm, i915_is_ggtt(__entry->vm) ? 'G' : 'P', __entry->start, __entry->end)
);

DECLARE_EVENT_CLASS(i915_px_entry,
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:16:53 UTC
Remove some redundant kernel messages as we deduce a hung GPU and
capture the error state.

v2: Fix "hang" vs "no progress" message whilst I was there

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_irq.c | 21 +++++++--------------
1 file changed, 7 insertions(+), 14 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index d9757d227c86..ce52d7d9ad91 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -3031,8 +3031,7 @@ static void i915_hangcheck_elapsed(struct work_struct *work)
struct drm_device *dev = dev_priv->dev;
struct intel_engine_cs *ring;
int i;
- int busy_count = 0, rings_hung = 0;
- bool stuck[I915_NUM_RINGS] = { 0 };
+ int busy_count = 0;
#define BUSY 1
#define KICK 5
#define HUNG 20
@@ -3108,7 +3107,6 @@ static void i915_hangcheck_elapsed(struct work_struct *work)
break;
case HANGCHECK_HUNG:
ring->hangcheck.score += HUNG;
- stuck[i] = true;
break;
}
}
@@ -3134,17 +3132,12 @@ static void i915_hangcheck_elapsed(struct work_struct *work)
busy_count += busy;
}

- for_each_ring(ring, dev_priv, i) {
- if (ring->hangcheck.score >= HANGCHECK_SCORE_RING_HUNG) {
- DRM_INFO("%s on %s\n",
- stuck[i] ? "stuck" : "no progress",
- ring->name);
- rings_hung++;
- }
- }
-
- if (rings_hung)
- return i915_handle_error(dev, true, "Ring hung");
+ for_each_ring(ring, dev_priv, i)
+ if (ring->hangcheck.score >= HANGCHECK_SCORE_RING_HUNG)
+ return i915_handle_error(dev, true,
+ "%s on %s",
+ ring->hangcheck.action == HANGCHECK_HUNG ? "Hang" : "No progress",
+ ring->name);

/* Reset timer in case GPU hangs without another request being added */
if (busy_count)
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:17:25 UTC
We use "list" to denote the list and "link" to denote an element on that
list. Rename request->list to match this idiom.
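
As a toy illustration of the idiom (simplified structs, not the
driver's own):

#include <linux/list.h>

struct toy_engine {
	struct list_head request_list;	/* the list itself: "*_list" */
};

struct toy_request {
	struct list_head link;		/* an element's entry: "link" */
};

static void toy_engine_init(struct toy_engine *e)
{
	INIT_LIST_HEAD(&e->request_list);
}

static void toy_submit(struct toy_engine *e, struct toy_request *rq)
{
	list_add_tail(&rq->link, &e->request_list);
}

static void toy_retire_all(struct toy_engine *e)
{
	struct toy_request *rq, *rn;

	list_for_each_entry_safe(rq, rn, &e->request_list, link)
		list_del_init(&rq->link);
}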

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_debugfs.c | 4 ++--
drivers/gpu/drm/i915/i915_gem.c | 12 ++++++------
drivers/gpu/drm/i915/i915_gem_request.c | 10 +++++-----
drivers/gpu/drm/i915/i915_gem_request.h | 4 ++--
drivers/gpu/drm/i915/i915_gpu_error.c | 4 ++--
drivers/gpu/drm/i915/intel_ringbuffer.c | 6 +++---
6 files changed, 20 insertions(+), 20 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 65cb1d6a5d64..efa9572fc217 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -695,13 +695,13 @@ static int i915_gem_request_info(struct seq_file *m, void *data)
int count;

count = 0;
- list_for_each_entry(req, &ring->request_list, list)
+ list_for_each_entry(req, &ring->request_list, link)
count++;
if (count == 0)
continue;

seq_printf(m, "%s requests: %d\n", ring->name, count);
- list_for_each_entry(req, &ring->request_list, list) {
+ list_for_each_entry(req, &ring->request_list, link) {
struct task_struct *task;

rcu_read_lock();
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 77c253ddf060..f314b3ea2726 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2183,7 +2183,7 @@ i915_gem_find_active_request(struct intel_engine_cs *ring)
* extra delay for a recent interrupt is pointless. Hence, we do
* not need an engine->irq_seqno_barrier() before the seqno reads.
*/
- list_for_each_entry(request, &ring->request_list, list) {
+ list_for_each_entry(request, &ring->request_list, link) {
if (i915_gem_request_completed(request))
continue;

@@ -2208,7 +2208,7 @@ static void i915_gem_reset_ring_status(struct intel_engine_cs *ring)

i915_set_reset_status(dev_priv, request->ctx, ring_hung);

- list_for_each_entry_continue(request, &ring->request_list, list)
+ list_for_each_entry_continue(request, &ring->request_list, link)
i915_set_reset_status(dev_priv, request->ctx, false);
}

@@ -2255,7 +2255,7 @@ static void i915_gem_reset_ring_cleanup(struct intel_engine_cs *engine)

request = list_last_entry(&engine->request_list,
struct drm_i915_gem_request,
- list);
+ link);

i915_gem_request_retire_upto(request);
}
@@ -2317,7 +2317,7 @@ i915_gem_retire_requests_ring(struct intel_engine_cs *ring)

request = list_first_entry(&ring->request_list,
struct drm_i915_gem_request,
- list);
+ link);

if (!i915_gem_request_completed(request))
break;
@@ -2336,7 +2336,7 @@ i915_gem_retire_requests_ring(struct intel_engine_cs *ring)
struct drm_i915_gem_object,
ring_list[ring->id]);

- if (!list_empty(&obj->last_read[ring->id].request->list))
+ if (!list_empty(&obj->last_read[ring->id].request->link))
break;

i915_gem_object_retire__read(obj, ring->id);
@@ -2449,7 +2449,7 @@ i915_gem_object_flush_active(struct drm_i915_gem_object *obj)
if (req == NULL)
continue;

- if (list_empty(&req->list))
+ if (list_empty(&req->link))
goto retire;

if (i915_gem_request_completed(req)) {
diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
index 01443d8d9224..7f38d8972721 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/i915_gem_request.c
@@ -333,7 +333,7 @@ void i915_gem_request_cancel(struct drm_i915_gem_request *req)
static void i915_gem_request_retire(struct drm_i915_gem_request *request)
{
trace_i915_gem_request_retire(request);
- list_del_init(&request->list);
+ list_del_init(&request->link);

/* We know the GPU must have read the request to have
* sent us the seqno + interrupt, so use the position
@@ -355,12 +355,12 @@ i915_gem_request_retire_upto(struct drm_i915_gem_request *req)

lockdep_assert_held(&engine->dev->struct_mutex);

- if (list_empty(&req->list))
+ if (list_empty(&req->link))
return;

do {
tmp = list_first_entry(&engine->request_list,
- typeof(*tmp), list);
+ typeof(*tmp), link);

i915_gem_request_retire(tmp);
} while (tmp != req);
@@ -451,7 +451,7 @@ void __i915_add_request(struct drm_i915_gem_request *request,
request->emitted_jiffies = jiffies;
request->previous_seqno = request->engine->last_submitted_seqno;
request->engine->last_submitted_seqno = request->fence.seqno;
- list_add_tail(&request->list, &request->engine->request_list);
+ list_add_tail(&request->link, &request->engine->request_list);

trace_i915_gem_request_add(request);

@@ -565,7 +565,7 @@ int __i915_wait_request(struct drm_i915_gem_request *req,

might_sleep();

- if (list_empty(&req->list))
+ if (list_empty(&req->link))
return 0;

if (i915_gem_request_completed(req))
diff --git a/drivers/gpu/drm/i915/i915_gem_request.h b/drivers/gpu/drm/i915/i915_gem_request.h
index 0a21986c332b..01d589be95fd 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.h
+++ b/drivers/gpu/drm/i915/i915_gem_request.h
@@ -88,8 +88,8 @@ struct drm_i915_gem_request {
/** Time at which this request was emitted, in jiffies. */
unsigned long emitted_jiffies;

- /** global list entry for this request */
- struct list_head list;
+ /** engine->request_list entry for this request */
+ struct list_head link;

struct drm_i915_file_private *file_priv;
/** file_priv list entry for this request */
diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index 5027636e3624..c812079bc25c 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -1056,7 +1056,7 @@ static void i915_gem_record_rings(struct drm_device *dev,
i915_gem_record_active_context(engine, error, &error->ring[i]);

count = 0;
- list_for_each_entry(request, &engine->request_list, list)
+ list_for_each_entry(request, &engine->request_list, link)
count++;

error->ring[i].num_requests = count;
@@ -1069,7 +1069,7 @@ static void i915_gem_record_rings(struct drm_device *dev,
}

count = 0;
- list_for_each_entry(request, &engine->request_list, list) {
+ list_for_each_entry(request, &engine->request_list, link) {
struct drm_i915_error_request *erq;

if (count >= error->ring[i].num_requests) {
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index d37cdb2f9073..213540f92c9d 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -2109,7 +2109,7 @@ int intel_engine_idle(struct intel_engine_cs *ring)

req = list_entry(ring->request_list.prev,
struct drm_i915_gem_request,
- list);
+ link);

/* Make sure we do not trigger any retires */
return __i915_wait_request(req,
@@ -2184,7 +2184,7 @@ static int wait_for_space(struct drm_i915_gem_request *req, int bytes)
/* The whole point of reserving space is to not wait! */
WARN_ON(ring->reserved_in_use);

- list_for_each_entry(target, &engine->request_list, list) {
+ list_for_each_entry(target, &engine->request_list, link) {
/*
* The request queue is per-engine, so can contain requests
* from multiple ringbuffers. Here, we must ignore any that
@@ -2200,7 +2200,7 @@ static int wait_for_space(struct drm_i915_gem_request *req, int bytes)
break;
}

- if (WARN_ON(&target->list == &engine->request_list))
+ if (WARN_ON(&target->link == &engine->request_list))
return -ENOSPC;

ret = i915_wait_request(target);
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:17:23 UTC
Given that intel_lr_context_pin() cannot succeed without the object,
we cannot reach intel_lr_context_unpin() without first allocating that
object - so we can remove the redundant test.

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/intel_lrc.c | 19 ++++++++-----------
1 file changed, 8 insertions(+), 11 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 84a8bcc90d78..0f0bf97e4032 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -769,17 +769,14 @@ static int intel_lr_context_pin(struct drm_i915_gem_request *rq)
void intel_lr_context_unpin(struct drm_i915_gem_request *rq)
{
int engine = rq->engine->id;
- struct drm_i915_gem_object *ctx_obj = rq->ctx->engine[engine].state;
- struct intel_ring *ring = rq->ring;
-
- if (ctx_obj) {
- WARN_ON(!mutex_is_locked(&rq->i915->dev->struct_mutex));
- if (--rq->ctx->engine[engine].pin_count == 0) {
- intel_ring_unmap(ring);
- i915_gem_object_ggtt_unpin(ctx_obj);
- i915_gem_context_unreference(rq->ctx);
- }
- }
+
+ WARN_ON(!mutex_is_locked(&rq->i915->dev->struct_mutex));
+ if (--rq->ctx->engine[engine].pin_count)
+ return;
+
+ intel_ring_unmap(rq->ring);
+ i915_gem_object_ggtt_unpin(rq->ctx->engine[engine].state);
+ i915_gem_context_unreference(rq->ctx);
}

static int intel_logical_ring_workarounds_emit(struct drm_i915_gem_request *req)
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:17:00 UTC
Since

commit a6f766f3975185af66a31a2cea2cd38721645999
Author: Chris Wilson <***@chris-wilson.co.uk>
Date: Mon Apr 27 13:41:20 2015 +0100

drm/i915: Limit ring synchronisation (sw sempahores) RPS boosts

and

commit bcafc4e38b6ad03f48989b7ecaff03845b5b7acf
Author: Chris Wilson <***@chris-wilson.co.uk>
Date: Mon Apr 27 13:41:21 2015 +0100

drm/i915: Limit mmio flip RPS boosts

we have limited the waitboosting for semaphores and flips. Ideally we do
not want to boost in either of these instances as no consumer is waiting
upon the results. With the introduction of NO_WAITBOOST in the previous
patch, we can finally disable these needless boosts.

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_debugfs.c | 8 +-------
drivers/gpu/drm/i915/i915_drv.h | 2 --
drivers/gpu/drm/i915/i915_gem.c | 2 +-
drivers/gpu/drm/i915/intel_display.c | 2 +-
drivers/gpu/drm/i915/intel_pm.c | 2 --
5 files changed, 3 insertions(+), 13 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index b82482573a8f..5335072f2047 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -2398,13 +2398,7 @@ static int i915_rps_boost_info(struct seq_file *m, void *data)
list_empty(&file_priv->rps.link) ? "" : ", active");
rcu_read_unlock();
}
- seq_printf(m, "Semaphore boosts: %d%s\n",
- dev_priv->rps.semaphores.boosts,
- list_empty(&dev_priv->rps.semaphores.link) ? "" : ", active");
- seq_printf(m, "MMIO flip boosts: %d%s\n",
- dev_priv->rps.mmioflips.boosts,
- list_empty(&dev_priv->rps.mmioflips.link) ? "" : ", active");
- seq_printf(m, "Kernel boosts: %d\n", dev_priv->rps.boosts);
+ seq_printf(m, "Kernel (anonymous) boosts: %d\n", dev_priv->rps.boosts);
spin_unlock(&dev_priv->rps.client_lock);

return 0;
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index ee146ce02412..49a151126b2a 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -1136,8 +1136,6 @@ struct intel_gen6_power_mgmt {
struct delayed_work delayed_resume_work;
unsigned boosts;

- struct intel_rps_client semaphores, mmioflips;
-
/* manual wa residency calculations */
struct intel_rps_ei up_ei, down_ei;

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index fd61e722b595..9df00e694cd9 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2533,7 +2533,7 @@ __i915_gem_object_sync(struct drm_i915_gem_object *obj,
ret = __i915_wait_request(from_req,
i915->mm.interruptible,
NULL,
- &i915->rps.semaphores);
+ NO_WAITBOOST);
if (ret)
return ret;

diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index ae247927e931..e2822530af25 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -11430,7 +11430,7 @@ static void intel_mmio_flip_work_func(struct work_struct *work)
if (mmio_flip->req) {
WARN_ON(__i915_wait_request(mmio_flip->req,
false, NULL,
- &mmio_flip->i915->rps.mmioflips));
+ NO_WAITBOOST));
i915_gem_request_put(mmio_flip->req);
}

diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index 39b7ca9c3e66..b340f2a1f110 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -7324,8 +7324,6 @@ void intel_pm_setup(struct drm_device *dev)
INIT_DELAYED_WORK(&dev_priv->rps.delayed_resume_work,
intel_gen6_powersave_work);
INIT_LIST_HEAD(&dev_priv->rps.clients);
- INIT_LIST_HEAD(&dev_priv->rps.semaphores.link);
- INIT_LIST_HEAD(&dev_priv->rps.mmioflips.link);

dev_priv->pm.suspended = false;
atomic_set(&dev_priv->pm.wakeref_count, 0);
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:17:37 UTC
When the user closes the context, mark it and the dependent address space
as closed. As we use an asynchronous destruct method, this has two purposes.
First, it allows us to flag the closed context and detect internal errors if
we try to create any new objects for it (as it is removed from the user's
namespace, these should be internal bugs only). And secondly, it allows
us to immediately reap stale vma.
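
The heart of the mechanism is a small close helper; a sketch, using
the names from the diff below (abbreviated, not the full patch):

static void context_close(struct intel_context *ctx)
{
	GEM_BUG_ON(ctx->closed);	/* a handle is closed exactly once */
	ctx->closed = true;		/* new allocations can now be caught */
	if (ctx->ppgtt)
		i915_ppgtt_close(&ctx->ppgtt->base);	/* reap stale vma */
	i915_gem_context_unreference(ctx);	/* final free is still async */
}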

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_drv.h | 3 +++
drivers/gpu/drm/i915/i915_gem.c | 17 +++++++-------
drivers/gpu/drm/i915/i915_gem_context.c | 40 +++++++++++++++++++++++++++++----
drivers/gpu/drm/i915/i915_gem_gtt.c | 9 ++++++--
drivers/gpu/drm/i915/i915_gem_gtt.h | 9 ++++++++
drivers/gpu/drm/i915/i915_gem_stolen.c | 2 +-
6 files changed, 65 insertions(+), 15 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 262d1b247344..fc35a9b8d910 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -888,6 +888,8 @@ struct intel_context {
} engine[I915_NUM_RINGS];

struct list_head link;
+
+ bool closed:1;
};

enum fb_op_origin {
@@ -2707,6 +2709,7 @@ int __must_check i915_vma_unbind(struct i915_vma *vma);
* _guarantee_ VMA in question is _not in use_ anywhere.
*/
int __must_check __i915_vma_unbind_no_wait(struct i915_vma *vma);
+void i915_vma_close(struct i915_vma *vma);

int i915_gem_object_unbind(struct drm_i915_gem_object *obj);
int i915_gem_object_put_pages(struct drm_i915_gem_object *obj);
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 1f95cf39b7d2..16ee3bd7010e 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2385,7 +2385,7 @@ i915_gem_object_flush_active(struct drm_i915_gem_object *obj)
}
}

-static void i915_vma_close(struct i915_vma *vma)
+void i915_vma_close(struct i915_vma *vma)
{
GEM_BUG_ON(vma->closed);
vma->closed = true;
@@ -2654,12 +2654,15 @@ static int __i915_vma_unbind(struct i915_vma *vma, bool wait)
return ret;
}

- trace_i915_vma_unbind(vma);
-
- vma->vm->unbind_vma(vma);
+ if (likely(!vma->vm->closed)) {
+ trace_i915_vma_unbind(vma);
+ vma->vm->unbind_vma(vma);
+ }
vma->bound = 0;

- list_del_init(&vma->vm_link);
+ drm_mm_remove_node(&vma->node);
+ list_move_tail(&vma->vm_link, &vma->vm->unbound_list);
+
if (vma->is_ggtt) {
if (vma->ggtt_view.type == I915_GGTT_VIEW_NORMAL) {
obj->map_and_fenceable = false;
@@ -2670,8 +2673,6 @@ static int __i915_vma_unbind(struct i915_vma *vma, bool wait)
vma->ggtt_view.pages = NULL;
}

- drm_mm_remove_node(&vma->node);
-
/* Since the unbound list is global, only move to that list if
* no more VMAs exist. */
if (--obj->bind_count == 0)
@@ -2917,7 +2918,7 @@ search_free:
goto err_remove_node;

list_move_tail(&obj->global_list, &dev_priv->mm.bound_list);
- list_add_tail(&vma->vm_link, &vm->inactive_list);
+ list_move_tail(&vma->vm_link, &vm->inactive_list);
obj->bind_count++;

return vma;
diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index 310a770b7984..4583d8fe3585 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -153,6 +153,7 @@ void i915_gem_context_free(struct kref *ctx_ref)
struct intel_context *ctx = container_of(ctx_ref, typeof(*ctx), ref);

trace_i915_context_free(ctx);
+ GEM_BUG_ON(!ctx->closed);

if (i915.enable_execlists)
intel_lr_context_free(ctx);
@@ -209,6 +210,37 @@ i915_gem_alloc_context_obj(struct drm_device *dev, size_t size)
return obj;
}

+static void i915_ppgtt_close(struct i915_address_space *vm)
+{
+ struct list_head *phases[] = {
+ &vm->active_list,
+ &vm->inactive_list,
+ &vm->unbound_list,
+ NULL,
+ }, **phase;
+
+ GEM_BUG_ON(i915_is_ggtt(vm));
+ GEM_BUG_ON(vm->closed);
+ vm->closed = true;
+
+ for (phase = phases; *phase; phase++) {
+ struct i915_vma *vma, *vn;
+
+ list_for_each_entry_safe(vma, vn, *phase, vm_link)
+ if (!vma->closed)
+ i915_vma_close(vma);
+ }
+}
+
+static void context_close(struct intel_context *ctx)
+{
+ GEM_BUG_ON(ctx->closed);
+ ctx->closed = true;
+ if (ctx->ppgtt)
+ i915_ppgtt_close(&ctx->ppgtt->base);
+ i915_gem_context_unreference(ctx);
+}
+
static struct intel_context *
__create_hw_context(struct drm_device *dev,
struct drm_i915_file_private *file_priv)
@@ -256,7 +288,7 @@ __create_hw_context(struct drm_device *dev,
return ctx;

err_out:
- i915_gem_context_unreference(ctx);
+ context_close(ctx);
return ERR_PTR(ret);
}

@@ -318,7 +350,7 @@ err_unpin:
i915_gem_object_ggtt_unpin(ctx->legacy_hw_ctx.rcs_state);
err_destroy:
idr_remove(&file_priv->context_idr, ctx->user_handle);
- i915_gem_context_unreference(ctx);
+ context_close(ctx);
return ERR_PTR(ret);
}

@@ -474,7 +506,7 @@ static int context_idr_cleanup(int id, void *p, void *data)
{
struct intel_context *ctx = p;

- i915_gem_context_unreference(ctx);
+ context_close(ctx);
return 0;
}

@@ -894,7 +926,7 @@ int i915_gem_context_destroy_ioctl(struct drm_device *dev, void *data,
}

idr_remove(&ctx->file_priv->context_idr, ctx->user_handle);
- i915_gem_context_unreference(ctx);
+ context_close(ctx);
mutex_unlock(&dev->struct_mutex);

DRM_DEBUG_DRIVER("HW context %d destroyed\n", args->ctx_id);
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index ef093db6b8a6..ad26c9e331aa 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -2130,6 +2130,7 @@ static void i915_address_space_init(struct i915_address_space *vm,
vm->dev = dev_priv->dev;
INIT_LIST_HEAD(&vm->active_list);
INIT_LIST_HEAD(&vm->inactive_list);
+ INIT_LIST_HEAD(&vm->unbound_list);
list_add_tail(&vm->global_link, &dev_priv->vm_list);
}

@@ -2214,9 +2215,10 @@ void i915_ppgtt_release(struct kref *kref)

trace_i915_ppgtt_release(&ppgtt->base);

- /* vmas should already be unbound */
+ /* vmas should already be unbound and destroyed */
WARN_ON(!list_empty(&ppgtt->base.active_list));
WARN_ON(!list_empty(&ppgtt->base.inactive_list));
+ WARN_ON(!list_empty(&ppgtt->base.unbound_list));

list_del(&ppgtt->base.global_link);
drm_mm_takedown(&ppgtt->base.mm);
@@ -3269,6 +3271,8 @@ __i915_gem_vma_create(struct drm_i915_gem_object *obj,
struct i915_vma *vma;
int i;

+ GEM_BUG_ON(vm->closed);
+
if (WARN_ON(i915_is_ggtt(vm) != !!ggtt_view))
return ERR_PTR(-EINVAL);

@@ -3276,11 +3280,11 @@ __i915_gem_vma_create(struct drm_i915_gem_object *obj,
if (vma == NULL)
return ERR_PTR(-ENOMEM);

- INIT_LIST_HEAD(&vma->vm_link);
INIT_LIST_HEAD(&vma->obj_link);
INIT_LIST_HEAD(&vma->exec_list);
for (i = 0; i < ARRAY_SIZE(vma->last_read); i++)
init_request_active(&vma->last_read[i], i915_vma_retire);
+ list_add(&vma->vm_link, &vm->unbound_list);
vma->vm = vm;
vma->obj = obj;
vma->is_ggtt = i915_is_ggtt(vm);
@@ -3327,6 +3331,7 @@ i915_gem_obj_lookup_or_create_ggtt_vma(struct drm_i915_gem_object *obj,
if (!vma)
vma = __i915_gem_vma_create(obj, ggtt, view);

+ GEM_BUG_ON(vma->closed);
return vma;

}
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
index d68d5fd02923..6346d1786d41 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.h
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
@@ -292,6 +292,8 @@ struct i915_address_space {
u64 start; /* Start offset always 0 for dri2 */
u64 total; /* size addr space maps (ex. 2GB for ggtt) */

+ bool closed;
+
struct i915_page_scratch *scratch_page;
struct i915_page_table *scratch_pt;
struct i915_page_directory *scratch_pd;
@@ -320,6 +322,13 @@ struct i915_address_space {
*/
struct list_head inactive_list;

+ /**
+ * List of vma that have been unbound.
+ *
+ * A reference is not held on the buffer while on this list.
+ */
+ struct list_head unbound_list;
+
/* FIXME: Need a more generic return type */
gen6_pte_t (*pte_encode)(dma_addr_t addr,
enum i915_cache_level level,
diff --git a/drivers/gpu/drm/i915/i915_gem_stolen.c b/drivers/gpu/drm/i915/i915_gem_stolen.c
index 1c81a1470baf..c110563823bd 100644
--- a/drivers/gpu/drm/i915/i915_gem_stolen.c
+++ b/drivers/gpu/drm/i915/i915_gem_stolen.c
@@ -692,7 +692,7 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_device *dev,

vma->bound |= GLOBAL_BIND;
__i915_vma_set_map_and_fenceable(vma);
- list_add_tail(&vma->vm_link, &ggtt->inactive_list);
+ list_move_tail(&vma->vm_link, &ggtt->inactive_list);
obj->bind_count++;

list_add_tail(&obj->global_list, &dev_priv->mm.bound_list);
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:17:14 UTC
Permalink
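Rename struct intel_ringbuffer to struct intel_ring, and rename the
helpers that operate on it to match: intel_engine_create_ring(),
intel_pin_and_map_ring(), intel_unpin_ring() and intel_ring_free().
The structure describes the ring of commands, as distinct from the
engine (intel_engine_cs) that executes them.
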
Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_debugfs.c | 21 +++---
drivers/gpu/drm/i915/i915_drv.h | 2 +-
drivers/gpu/drm/i915/i915_gem.c | 43 ++++++------
drivers/gpu/drm/i915/i915_gem_context.c | 2 +-
drivers/gpu/drm/i915/i915_gem_execbuffer.c | 4 +-
drivers/gpu/drm/i915/i915_gem_gtt.c | 6 +-
drivers/gpu/drm/i915/i915_gem_request.c | 2 +-
drivers/gpu/drm/i915/i915_gem_request.h | 2 +-
drivers/gpu/drm/i915/i915_gpu_error.c | 2 +-
drivers/gpu/drm/i915/i915_guc_submission.c | 2 +-
drivers/gpu/drm/i915/intel_display.c | 10 +--
drivers/gpu/drm/i915/intel_lrc.c | 40 ++++++------
drivers/gpu/drm/i915/intel_mocs.c | 4 +-
drivers/gpu/drm/i915/intel_ringbuffer.c | 101 ++++++++++++++---------------
drivers/gpu/drm/i915/intel_ringbuffer.h | 45 ++++++-------
15 files changed, 138 insertions(+), 148 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index dec10784c2bc..8de944ed3369 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -1948,12 +1948,11 @@ static int i915_gem_framebuffer_info(struct seq_file *m, void *data)
return 0;
}

-static void describe_ctx_ringbuf(struct seq_file *m,
- struct intel_ringbuffer *ringbuf)
+static void describe_ctx_ring(struct seq_file *m, struct intel_ring *ring)
{
seq_printf(m, " (ringbuffer, space: %d, head: %u, tail: %u, last head: %d)",
- ringbuf->space, ringbuf->head, ringbuf->tail,
- ringbuf->last_retired_head);
+ ring->space, ring->head, ring->tail,
+ ring->last_retired_head);
}

static int i915_context_status(struct seq_file *m, void *unused)
@@ -1985,16 +1984,12 @@ static int i915_context_status(struct seq_file *m, void *unused)
if (i915.enable_execlists) {
seq_putc(m, '\n');
for_each_ring(ring, dev_priv, i) {
- struct drm_i915_gem_object *ctx_obj =
- ctx->engine[i].state;
- struct intel_ringbuffer *ringbuf =
- ctx->engine[i].ring;
-
seq_printf(m, "%s: ", ring->name);
- if (ctx_obj)
- describe_obj(m, ctx_obj);
- if (ringbuf)
- describe_ctx_ringbuf(m, ringbuf);
+ if (ctx->engine[i].state)
+ describe_obj(m, ctx->engine[i].state);
+ if (ctx->engine[i].ring)
+ describe_ctx_ring(m,
+ ctx->engine[i].ring);
seq_putc(m, '\n');
}
} else {
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 466adc6617f0..44e8738c5310 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -885,7 +885,7 @@ struct intel_context {
/* Execlists */
struct {
struct drm_i915_gem_object *state;
- struct intel_ringbuffer *ring;
+ struct intel_ring *ring;
int pin_count;
} engine[I915_NUM_RINGS];

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index a81cad666d3a..1c6beb154d07 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2193,9 +2193,9 @@ i915_gem_find_active_request(struct intel_engine_cs *ring)
return NULL;
}

-static void i915_gem_reset_ring_status(struct drm_i915_private *dev_priv,
- struct intel_engine_cs *ring)
+static void i915_gem_reset_ring_status(struct intel_engine_cs *ring)
{
+ struct drm_i915_private *dev_priv = ring->i915;
struct drm_i915_gem_request *request;
bool ring_hung;

@@ -2212,19 +2212,18 @@ static void i915_gem_reset_ring_status(struct drm_i915_private *dev_priv,
i915_set_reset_status(dev_priv, request->ctx, false);
}

-static void i915_gem_reset_ring_cleanup(struct drm_i915_private *dev_priv,
- struct intel_engine_cs *ring)
+static void i915_gem_reset_ring_cleanup(struct intel_engine_cs *engine)
{
- struct intel_ringbuffer *buffer;
+ struct intel_ring *ring;

- while (!list_empty(&ring->active_list)) {
+ while (!list_empty(&engine->active_list)) {
struct drm_i915_gem_object *obj;

- obj = list_first_entry(&ring->active_list,
+ obj = list_first_entry(&engine->active_list,
struct drm_i915_gem_object,
- ring_list[ring->id]);
+ ring_list[engine->id]);

- i915_gem_object_retire__read(obj, ring->id);
+ i915_gem_object_retire__read(obj, engine->id);
}

/*
@@ -2234,14 +2233,14 @@ static void i915_gem_reset_ring_cleanup(struct drm_i915_private *dev_priv,
*/

if (i915.enable_execlists) {
- spin_lock_irq(&ring->execlist_lock);
+ spin_lock_irq(&engine->execlist_lock);

/* list_splice_tail_init checks for empty lists */
- list_splice_tail_init(&ring->execlist_queue,
- &ring->execlist_retired_req_list);
+ list_splice_tail_init(&engine->execlist_queue,
+ &engine->execlist_retired_req_list);

- spin_unlock_irq(&ring->execlist_lock);
- intel_execlists_retire_requests(ring);
+ spin_unlock_irq(&engine->execlist_lock);
+ intel_execlists_retire_requests(engine);
}

/*
@@ -2251,10 +2250,10 @@ static void i915_gem_reset_ring_cleanup(struct drm_i915_private *dev_priv,
* implicit references on things like e.g. ppgtt address spaces through
* the request.
*/
- if (!list_empty(&ring->request_list)) {
+ if (!list_empty(&engine->request_list)) {
struct drm_i915_gem_request *request;

- request = list_last_entry(&ring->request_list,
+ request = list_last_entry(&engine->request_list,
struct drm_i915_gem_request,
list);

@@ -2268,12 +2267,12 @@ static void i915_gem_reset_ring_cleanup(struct drm_i915_private *dev_priv,
* upon reset is less than when we start. Do one more pass over
* all the ringbuffers to reset last_retired_head.
*/
- list_for_each_entry(buffer, &ring->buffers, link) {
- buffer->last_retired_head = buffer->tail;
- intel_ring_update_space(buffer);
+ list_for_each_entry(ring, &engine->buffers, link) {
+ ring->last_retired_head = ring->tail;
+ intel_ring_update_space(ring);
}

- intel_engine_init_seqno(ring, ring->last_submitted_seqno);
+ intel_engine_init_seqno(engine, engine->last_submitted_seqno);
}

void i915_gem_reset(struct drm_device *dev)
@@ -2288,10 +2287,10 @@ void i915_gem_reset(struct drm_device *dev)
* their reference to the objects, the inspection must be done first.
*/
for_each_ring(ring, dev_priv, i)
- i915_gem_reset_ring_status(dev_priv, ring);
+ i915_gem_reset_ring_status(ring);

for_each_ring(ring, dev_priv, i)
- i915_gem_reset_ring_cleanup(dev_priv, ring);
+ i915_gem_reset_ring_cleanup(ring);

i915_gem_context_reset(dev);

diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index ac2e205fe3b4..17fe8ed991d6 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -519,7 +519,7 @@ i915_gem_context_get(struct drm_i915_file_private *file_priv, u32 id)
static inline int
mi_set_context(struct drm_i915_gem_request *req, u32 hw_flags)
{
- struct intel_ringbuffer *ring = req->ring;
+ struct intel_ring *ring = req->ring;
u32 flags = hw_flags | MI_MM_SPACE_GTT;
const int num_rings =
/* Use an extended w/a on ivb+ if signalling from other rings */
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index b7c90072f7d4..731ce13dbdbc 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -1148,7 +1148,7 @@ i915_gem_execbuffer_retire_commands(struct i915_execbuffer_params *params)
static int
i915_reset_gen7_sol_offsets(struct drm_i915_gem_request *req)
{
- struct intel_ringbuffer *ring = req->ring;
+ struct intel_ring *ring = req->ring;
int ret, i;

if (!IS_GEN7(req->i915) || req->engine->id != RCS) {
@@ -1229,7 +1229,7 @@ i915_gem_ringbuffer_submission(struct i915_execbuffer_params *params,
struct drm_i915_gem_execbuffer2 *args,
struct list_head *vmas)
{
- struct intel_ringbuffer *ring = params->request->ring;
+ struct intel_ring *ring = params->request->ring;
struct drm_i915_private *dev_priv = params->request->i915;
u64 exec_start, exec_len;
int instp_mode;
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 38c109cda904..9a91451d66ac 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -656,7 +656,7 @@ static int gen8_write_pdp(struct drm_i915_gem_request *req,
unsigned entry,
dma_addr_t addr)
{
- struct intel_ringbuffer *ring = req->ring;
+ struct intel_ring *ring = req->ring;
int ret;

BUG_ON(entry >= 4);
@@ -1648,7 +1648,7 @@ static uint32_t get_pd_offset(struct i915_hw_ppgtt *ppgtt)
static int hsw_mm_switch(struct i915_hw_ppgtt *ppgtt,
struct drm_i915_gem_request *req)
{
- struct intel_ringbuffer *ring = req->ring;
+ struct intel_ring *ring = req->ring;
int ret;

/* NB: TLBs must be flushed and invalidated before a switch */
@@ -1686,7 +1686,7 @@ static int vgpu_mm_switch(struct i915_hw_ppgtt *ppgtt,
static int gen7_mm_switch(struct i915_hw_ppgtt *ppgtt,
struct drm_i915_gem_request *req)
{
- struct intel_ringbuffer *ring = req->ring;
+ struct intel_ring *ring = req->ring;
int ret;

/* NB: TLBs must be flushed and invalidated before a switch */
diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
index 54834ad1bf5e..e1f2af046b6c 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/i915_gem_request.c
@@ -401,7 +401,7 @@ void __i915_add_request(struct drm_i915_gem_request *request,
struct drm_i915_gem_object *obj,
bool flush_caches)
{
- struct intel_ringbuffer *ring;
+ struct intel_ring *ring;
u32 request_start;
int ret;

diff --git a/drivers/gpu/drm/i915/i915_gem_request.h b/drivers/gpu/drm/i915/i915_gem_request.h
index cd4412f6e7e3..086950567db4 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.h
+++ b/drivers/gpu/drm/i915/i915_gem_request.h
@@ -79,7 +79,7 @@ struct drm_i915_gem_request {
* context.
*/
struct intel_context *ctx;
- struct intel_ringbuffer *ring;
+ struct intel_ring *ring;

/** Batch buffer related to this request if any (used for
error state dump only) */
diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index f27d6d1b64d6..2785f2d1f073 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -1007,7 +1007,7 @@ static void i915_gem_record_rings(struct drm_device *dev,
request = i915_gem_find_active_request(engine);
if (request) {
struct i915_address_space *vm;
- struct intel_ringbuffer *ring;
+ struct intel_ring *ring;

vm = request->ctx && request->ctx->ppgtt ?
&request->ctx->ppgtt->base :
diff --git a/drivers/gpu/drm/i915/i915_guc_submission.c b/drivers/gpu/drm/i915/i915_guc_submission.c
index 39ccfa8934e3..5a6251926367 100644
--- a/drivers/gpu/drm/i915/i915_guc_submission.c
+++ b/drivers/gpu/drm/i915/i915_guc_submission.c
@@ -390,7 +390,7 @@ static void guc_init_ctx_desc(struct intel_guc *guc,

for (i = 0; i < I915_NUM_RINGS; i++) {
struct guc_execlist_context *lrc = &desc.lrc[i];
- struct intel_ringbuffer *ring = ctx->engine[i].ring;
+ struct intel_ring *ring = ctx->engine[i].ring;
struct intel_engine_cs *engine;
struct drm_i915_gem_object *obj;
uint64_t ctx_desc;
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 0d42356f15b4..f8717c5627dd 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -11052,7 +11052,7 @@ static int intel_gen2_queue_flip(struct drm_device *dev,
struct drm_i915_gem_request *req,
uint32_t flags)
{
- struct intel_ringbuffer *ring = req->ring;
+ struct intel_ring *ring = req->ring;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
u32 flip_mask;
int ret;
@@ -11087,7 +11087,7 @@ static int intel_gen3_queue_flip(struct drm_device *dev,
struct drm_i915_gem_request *req,
uint32_t flags)
{
- struct intel_ringbuffer *ring = req->ring;
+ struct intel_ring *ring = req->ring;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
u32 flip_mask;
int ret;
@@ -11119,7 +11119,7 @@ static int intel_gen4_queue_flip(struct drm_device *dev,
struct drm_i915_gem_request *req,
uint32_t flags)
{
- struct intel_ringbuffer *ring = req->ring;
+ struct intel_ring *ring = req->ring;
struct drm_i915_private *dev_priv = req->i915;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
uint32_t pf, pipesrc;
@@ -11158,7 +11158,7 @@ static int intel_gen6_queue_flip(struct drm_device *dev,
struct drm_i915_gem_request *req,
uint32_t flags)
{
- struct intel_ringbuffer *ring = req->ring;
+ struct intel_ring *ring = req->ring;
struct drm_i915_private *dev_priv = req->i915;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
uint32_t pf, pipesrc;
@@ -11194,7 +11194,7 @@ static int intel_gen7_queue_flip(struct drm_device *dev,
struct drm_i915_gem_request *req,
uint32_t flags)
{
- struct intel_ringbuffer *ring = req->ring;
+ struct intel_ring *ring = req->ring;
struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
uint32_t plane_bit = 0;
int len, ret;
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 92ae7bc532ed..fa4c0c0db994 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -449,7 +449,7 @@ static void execlists_context_unqueue(struct intel_engine_cs *engine)
* for where we prepare the padding after the end of the
* request.
*/
- struct intel_ringbuffer *ring;
+ struct intel_ring *ring;

ring = req0->ctx->engine[engine->id].ring;
req0->tail += 8;
@@ -742,7 +742,7 @@ int intel_execlists_submission(struct i915_execbuffer_params *params,
struct drm_device *dev = params->dev;
struct intel_engine_cs *engine = params->ring;
struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_ringbuffer *ring = params->request->ring;
+ struct intel_ring *ring = params->request->ring;
u64 exec_start;
int instp_mode;
u32 instp_mask;
@@ -878,7 +878,7 @@ int logical_ring_flush_all_caches(struct drm_i915_gem_request *req)

static int intel_lr_context_do_pin(struct intel_engine_cs *ring,
struct drm_i915_gem_object *ctx_obj,
- struct intel_ringbuffer *ringbuf)
+ struct intel_ring *ringbuf)
{
struct drm_i915_private *dev_priv = ring->i915;
int ret = 0;
@@ -889,7 +889,7 @@ static int intel_lr_context_do_pin(struct intel_engine_cs *ring,
if (ret)
return ret;

- ret = intel_pin_and_map_ringbuffer_obj(ring->dev, ringbuf);
+ ret = intel_pin_and_map_ring(ring->dev, ringbuf);
if (ret)
goto unpin_ctx_obj;

@@ -931,12 +931,12 @@ void intel_lr_context_unpin(struct drm_i915_gem_request *rq)
{
int engine = rq->engine->id;
struct drm_i915_gem_object *ctx_obj = rq->ctx->engine[engine].state;
- struct intel_ringbuffer *ring = rq->ring;
+ struct intel_ring *ring = rq->ring;

if (ctx_obj) {
WARN_ON(!mutex_is_locked(&rq->i915->dev->struct_mutex));
if (--rq->ctx->engine[engine].pin_count == 0) {
- intel_unpin_ringbuffer_obj(ring);
+ intel_unpin_ring(ring);
i915_gem_object_ggtt_unpin(ctx_obj);
i915_gem_context_unreference(rq->ctx);
}
@@ -947,7 +947,7 @@ static int intel_logical_ring_workarounds_emit(struct drm_i915_gem_request *req)
{
int ret, i;
struct intel_engine_cs *engine = req->engine;
- struct intel_ringbuffer *ring = req->ring;
+ struct intel_ring *ring = req->ring;
struct drm_i915_private *dev_priv = req->i915;
struct i915_workarounds *w = &dev_priv->workarounds;

@@ -1417,7 +1417,7 @@ static int intel_logical_ring_emit_pdps(struct drm_i915_gem_request *req)
{
struct i915_hw_ppgtt *ppgtt = req->ctx->ppgtt;
struct intel_engine_cs *engine = req->engine;
- struct intel_ringbuffer *ring = req->ring;
+ struct intel_ring *ring = req->ring;
const int num_lri_cmds = GEN8_LEGACY_PDPES * 2;
int i, ret;

@@ -1444,7 +1444,7 @@ static int intel_logical_ring_emit_pdps(struct drm_i915_gem_request *req)
static int gen8_emit_bb_start(struct drm_i915_gem_request *req,
u64 offset, unsigned dispatch_flags)
{
- struct intel_ringbuffer *ring = req->ring;
+ struct intel_ring *ring = req->ring;
bool ppgtt = !(dispatch_flags & I915_DISPATCH_SECURE);
int ret;

@@ -1503,7 +1503,7 @@ static int gen8_emit_flush(struct drm_i915_gem_request *request,
u32 invalidate_domains,
u32 unused)
{
- struct intel_ringbuffer *ring = request->ring;
+ struct intel_ring *ring = request->ring;
uint32_t cmd;
int ret;

@@ -1541,7 +1541,7 @@ static int gen8_emit_flush_render(struct drm_i915_gem_request *request,
u32 invalidate_domains,
u32 flush_domains)
{
- struct intel_ringbuffer *ring = request->ring;
+ struct intel_ring *ring = request->ring;
u32 scratch_addr = request->engine->scratch.gtt_offset + 2 * CACHELINE_BYTES;
bool vf_flush_wa = false;
u32 flags = 0;
@@ -1620,7 +1620,7 @@ gen6_seqno_barrier(struct intel_engine_cs *ring)

static int gen8_emit_request(struct drm_i915_gem_request *request)
{
- struct intel_ringbuffer *ring = request->ring;
+ struct intel_ring *ring = request->ring;
u32 cmd;
int ret;

@@ -2039,7 +2039,7 @@ make_rpcs(struct drm_device *dev)

static int
populate_lr_context(struct intel_context *ctx, struct drm_i915_gem_object *ctx_obj,
- struct intel_engine_cs *ring, struct intel_ringbuffer *ringbuf)
+ struct intel_engine_cs *ring, struct intel_ring *ringbuf)
{
struct drm_device *dev = ring->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
@@ -2174,15 +2174,15 @@ void intel_lr_context_free(struct intel_context *ctx)
struct drm_i915_gem_object *ctx_obj = ctx->engine[i].state;

if (ctx_obj) {
- struct intel_ringbuffer *ring = ctx->engine[i].ring;
+ struct intel_ring *ring = ctx->engine[i].ring;
struct intel_engine_cs *engine = ring->engine;

if (ctx == engine->default_context) {
- intel_unpin_ringbuffer_obj(ring);
+ intel_unpin_ring(ring);
i915_gem_object_ggtt_unpin(ctx_obj);
}
WARN_ON(ctx->engine[engine->id].pin_count);
- intel_ringbuffer_free(ring);
+ intel_ring_free(ring);
drm_gem_object_unreference(&ctx_obj->base);
}
}
@@ -2262,7 +2262,7 @@ int intel_lr_context_deferred_alloc(struct intel_context *ctx,
{
struct drm_i915_gem_object *ctx_obj;
uint32_t context_size;
- struct intel_ringbuffer *ring;
+ struct intel_ring *ring;
int ret;

WARN_ON(ctx->legacy_hw_ctx.rcs_state != NULL);
@@ -2279,7 +2279,7 @@ int intel_lr_context_deferred_alloc(struct intel_context *ctx,
return -ENOMEM;
}

- ring = intel_engine_create_ringbuffer(engine, 4 * PAGE_SIZE);
+ ring = intel_engine_create_ring(engine, 4 * PAGE_SIZE);
if (IS_ERR(ring)) {
ret = PTR_ERR(ring);
goto error_deref_obj;
@@ -2316,7 +2316,7 @@ int intel_lr_context_deferred_alloc(struct intel_context *ctx,
return 0;

error_ringbuf:
- intel_ringbuffer_free(ring);
+ intel_ring_free(ring);
error_deref_obj:
drm_gem_object_unreference(&ctx_obj->base);
ctx->engine[engine->id].ring = NULL;
@@ -2333,7 +2333,7 @@ void intel_lr_context_reset(struct drm_device *dev,

for_each_ring(unused, dev_priv, i) {
struct drm_i915_gem_object *ctx_obj = ctx->engine[i].state;
- struct intel_ringbuffer *ring = ctx->engine[i].ring;
+ struct intel_ring *ring = ctx->engine[i].ring;
uint32_t *reg_state;
struct page *page;

diff --git a/drivers/gpu/drm/i915/intel_mocs.c b/drivers/gpu/drm/i915/intel_mocs.c
index 61e1704d7313..1b724c0a711e 100644
--- a/drivers/gpu/drm/i915/intel_mocs.c
+++ b/drivers/gpu/drm/i915/intel_mocs.c
@@ -193,7 +193,7 @@ static int emit_mocs_control_table(struct drm_i915_gem_request *req,
const struct drm_i915_mocs_table *table,
enum intel_engine_id id)
{
- struct intel_ringbuffer *ring = req->ring;
+ struct intel_ring *ring = req->ring;
unsigned int index;
int ret;

@@ -244,7 +244,7 @@ static int emit_mocs_control_table(struct drm_i915_gem_request *req,
static int emit_mocs_l3cc_table(struct drm_i915_gem_request *req,
const struct drm_i915_mocs_table *table)
{
- struct intel_ringbuffer *ring = req->ring;
+ struct intel_ring *ring = req->ring;
unsigned int count;
unsigned int i;
u32 value;
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index 1bb9f376aa0b..95974156a1d9 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -42,7 +42,7 @@ int __intel_ring_space(int head, int tail, int size)
return space - I915_RING_FREE_SPACE;
}

-void intel_ring_update_space(struct intel_ringbuffer *ringbuf)
+void intel_ring_update_space(struct intel_ring *ringbuf)
{
if (ringbuf->last_retired_head != -1) {
ringbuf->head = ringbuf->last_retired_head;
@@ -53,7 +53,7 @@ void intel_ring_update_space(struct intel_ringbuffer *ringbuf)
ringbuf->tail, ringbuf->size);
}

-int intel_ring_space(struct intel_ringbuffer *ringbuf)
+int intel_ring_space(struct intel_ring *ringbuf)
{
intel_ring_update_space(ringbuf);
return ringbuf->space;
@@ -61,7 +61,7 @@ int intel_ring_space(struct intel_ringbuffer *ringbuf)

static void __intel_ring_advance(struct intel_engine_cs *ring)
{
- struct intel_ringbuffer *ringbuf = ring->buffer;
+ struct intel_ring *ringbuf = ring->buffer;
ringbuf->tail &= ringbuf->size - 1;
ring->write_tail(ring, ringbuf->tail);
}
@@ -71,7 +71,7 @@ gen2_render_ring_flush(struct drm_i915_gem_request *req,
u32 invalidate_domains,
u32 flush_domains)
{
- struct intel_ringbuffer *ring = req->ring;
+ struct intel_ring *ring = req->ring;
u32 cmd;
int ret;

@@ -98,7 +98,7 @@ gen4_render_ring_flush(struct drm_i915_gem_request *req,
u32 invalidate_domains,
u32 flush_domains)
{
- struct intel_ringbuffer *ring = req->ring;
+ struct intel_ring *ring = req->ring;
u32 cmd;
int ret;

@@ -191,7 +191,7 @@ gen4_render_ring_flush(struct drm_i915_gem_request *req,
static int
intel_emit_post_sync_nonzero_flush(struct drm_i915_gem_request *req)
{
- struct intel_ringbuffer *ring = req->ring;
+ struct intel_ring *ring = req->ring;
u32 scratch_addr = req->engine->scratch.gtt_offset + 2 * CACHELINE_BYTES;
int ret;

@@ -227,7 +227,7 @@ static int
gen6_render_ring_flush(struct drm_i915_gem_request *req,
u32 invalidate_domains, u32 flush_domains)
{
- struct intel_ringbuffer *ring = req->ring;
+ struct intel_ring *ring = req->ring;
u32 flags = 0;
u32 scratch_addr = req->engine->scratch.gtt_offset + 2 * CACHELINE_BYTES;
int ret;
@@ -279,7 +279,7 @@ gen6_render_ring_flush(struct drm_i915_gem_request *req,
static int
gen7_render_ring_cs_stall_wa(struct drm_i915_gem_request *req)
{
- struct intel_ringbuffer *ring = req->ring;
+ struct intel_ring *ring = req->ring;
int ret;

ret = intel_ring_begin(req, 4);
@@ -300,7 +300,7 @@ static int
gen7_render_ring_flush(struct drm_i915_gem_request *req,
u32 invalidate_domains, u32 flush_domains)
{
- struct intel_ringbuffer *ring = req->ring;
+ struct intel_ring *ring = req->ring;
u32 flags = 0;
u32 scratch_addr = req->engine->scratch.gtt_offset + 2 * CACHELINE_BYTES;
int ret;
@@ -363,7 +363,7 @@ static int
gen8_emit_pipe_control(struct drm_i915_gem_request *req,
u32 flags, u32 scratch_addr)
{
- struct intel_ringbuffer *ring = req->ring;
+ struct intel_ring *ring = req->ring;
int ret;

ret = intel_ring_begin(req, 6);
@@ -547,7 +547,7 @@ static int init_ring_common(struct intel_engine_cs *ring)
{
struct drm_device *dev = ring->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_ringbuffer *ringbuf = ring->buffer;
+ struct intel_ring *ringbuf = ring->buffer;
struct drm_i915_gem_object *obj = ringbuf->obj;
int ret = 0;

@@ -688,7 +688,7 @@ err:

static int intel_ring_workarounds_emit(struct drm_i915_gem_request *req)
{
- struct intel_ringbuffer *ring = req->ring;
+ struct intel_ring *ring = req->ring;
struct drm_i915_private *dev_priv = req->i915;
struct i915_workarounds *w = &dev_priv->workarounds;
int ret, i;
@@ -1191,7 +1191,7 @@ static int gen8_rcs_signal(struct drm_i915_gem_request *signaller_req,
unsigned int num_dwords)
{
#define MBOX_UPDATE_DWORDS 8
- struct intel_ringbuffer *signaller = signaller_req->ring;
+ struct intel_ring *signaller = signaller_req->ring;
struct drm_i915_private *dev_priv = signaller_req->i915;
struct intel_engine_cs *waiter;
int i, ret, num_rings;
@@ -1229,7 +1229,7 @@ static int gen8_xcs_signal(struct drm_i915_gem_request *signaller_req,
unsigned int num_dwords)
{
#define MBOX_UPDATE_DWORDS 6
- struct intel_ringbuffer *signaller = signaller_req->ring;
+ struct intel_ring *signaller = signaller_req->ring;
struct drm_i915_private *dev_priv = signaller_req->i915;
struct intel_engine_cs *waiter;
int i, ret, num_rings;
@@ -1264,7 +1264,7 @@ static int gen8_xcs_signal(struct drm_i915_gem_request *signaller_req,
static int gen6_signal(struct drm_i915_gem_request *signaller_req,
unsigned int num_dwords)
{
- struct intel_ringbuffer *signaller = signaller_req->ring;
+ struct intel_ring *signaller = signaller_req->ring;
struct drm_i915_private *dev_priv = signaller_req->i915;
struct intel_engine_cs *useless;
int i, ret, num_rings;
@@ -1306,7 +1306,7 @@ static int gen6_signal(struct drm_i915_gem_request *signaller_req,
static int
gen6_add_request(struct drm_i915_gem_request *req)
{
- struct intel_ringbuffer *ring = req->ring;
+ struct intel_ring *ring = req->ring;
int ret;

if (req->engine->semaphore.signal)
@@ -1345,7 +1345,7 @@ gen8_ring_sync(struct drm_i915_gem_request *waiter_req,
struct intel_engine_cs *signaller,
u32 seqno)
{
- struct intel_ringbuffer *waiter = waiter_req->ring;
+ struct intel_ring *waiter = waiter_req->ring;
struct drm_i915_private *dev_priv = waiter_req->i915;
int ret;

@@ -1373,7 +1373,7 @@ gen6_ring_sync(struct drm_i915_gem_request *waiter_req,
struct intel_engine_cs *signaller,
u32 seqno)
{
- struct intel_ringbuffer *waiter = waiter_req->ring;
+ struct intel_ring *waiter = waiter_req->ring;
u32 dw1 = MI_SEMAPHORE_MBOX |
MI_SEMAPHORE_COMPARE |
MI_SEMAPHORE_REGISTER;
@@ -1421,7 +1421,7 @@ do { \
static int
pc_render_add_request(struct drm_i915_gem_request *req)
{
- struct intel_ringbuffer *ring = req->ring;
+ struct intel_ring *ring = req->ring;
u32 addr = req->engine->status_page.gfx_addr +
(I915_GEM_HWS_INDEX << MI_STORE_DWORD_INDEX_SHIFT);
u32 scratch_addr = addr;
@@ -1548,7 +1548,7 @@ bsd_ring_flush(struct drm_i915_gem_request *req,
u32 invalidate_domains,
u32 flush_domains)
{
- struct intel_ringbuffer *ring = req->ring;
+ struct intel_ring *ring = req->ring;
int ret;

ret = intel_ring_begin(req, 2);
@@ -1564,7 +1564,7 @@ bsd_ring_flush(struct drm_i915_gem_request *req,
static int
i9xx_add_request(struct drm_i915_gem_request *req)
{
- struct intel_ringbuffer *ring = req->ring;
+ struct intel_ring *ring = req->ring;
int ret;

ret = intel_ring_begin(req, 4);
@@ -1658,7 +1658,7 @@ i965_dispatch_execbuffer(struct drm_i915_gem_request *req,
u64 offset, u32 length,
unsigned dispatch_flags)
{
- struct intel_ringbuffer *ring = req->ring;
+ struct intel_ring *ring = req->ring;
int ret;

ret = intel_ring_begin(req, 2);
@@ -1685,7 +1685,7 @@ i830_dispatch_execbuffer(struct drm_i915_gem_request *req,
u64 offset, u32 len,
unsigned dispatch_flags)
{
- struct intel_ringbuffer *ring = req->ring;
+ struct intel_ring *ring = req->ring;
u32 cs_offset = req->engine->scratch.gtt_offset;
int ret;

@@ -1748,7 +1748,7 @@ i915_dispatch_execbuffer(struct drm_i915_gem_request *req,
u64 offset, u32 len,
unsigned dispatch_flags)
{
- struct intel_ringbuffer *ring = req->ring;
+ struct intel_ring *ring = req->ring;
int ret;

ret = intel_ring_begin(req, 2);
@@ -1845,7 +1845,7 @@ static int init_phys_status_page(struct intel_engine_cs *ring)
return 0;
}

-void intel_unpin_ringbuffer_obj(struct intel_ringbuffer *ringbuf)
+void intel_unpin_ring(struct intel_ring *ringbuf)
{
if (HAS_LLC(ringbuf->obj->base.dev) && !ringbuf->obj->stolen)
i915_gem_object_unpin_vmap(ringbuf->obj);
@@ -1854,8 +1854,7 @@ void intel_unpin_ringbuffer_obj(struct intel_ringbuffer *ringbuf)
i915_gem_object_ggtt_unpin(ringbuf->obj);
}

-int intel_pin_and_map_ringbuffer_obj(struct drm_device *dev,
- struct intel_ringbuffer *ringbuf)
+int intel_pin_and_map_ring(struct drm_device *dev, struct intel_ring *ringbuf)
{
struct drm_i915_private *dev_priv = to_i915(dev);
struct drm_i915_gem_object *obj = ringbuf->obj;
@@ -1900,14 +1899,14 @@ unpin:
return ret;
}

-static void intel_destroy_ringbuffer_obj(struct intel_ringbuffer *ringbuf)
+static void intel_destroy_ringbuffer_obj(struct intel_ring *ringbuf)
{
drm_gem_object_unreference(&ringbuf->obj->base);
ringbuf->obj = NULL;
}

static int intel_alloc_ringbuffer_obj(struct drm_device *dev,
- struct intel_ringbuffer *ringbuf)
+ struct intel_ring *ringbuf)
{
struct drm_i915_gem_object *obj;

@@ -1927,10 +1926,10 @@ static int intel_alloc_ringbuffer_obj(struct drm_device *dev,
return 0;
}

-struct intel_ringbuffer *
-intel_engine_create_ringbuffer(struct intel_engine_cs *engine, int size)
+struct intel_ring *
+intel_engine_create_ring(struct intel_engine_cs *engine, int size)
{
- struct intel_ringbuffer *ring;
+ struct intel_ring *ring;
int ret;

ring = kzalloc(sizeof(*ring), GFP_KERNEL);
@@ -1968,7 +1967,7 @@ intel_engine_create_ringbuffer(struct intel_engine_cs *engine, int size)
}

void
-intel_ringbuffer_free(struct intel_ringbuffer *ring)
+intel_ring_free(struct intel_ring *ring)
{
intel_destroy_ringbuffer_obj(ring);
list_del(&ring->link);
@@ -1978,7 +1977,7 @@ intel_ringbuffer_free(struct intel_ringbuffer *ring)
static int intel_init_engine(struct drm_device *dev,
struct intel_engine_cs *engine)
{
- struct intel_ringbuffer *ringbuf;
+ struct intel_ring *ringbuf;
int ret;

WARN_ON(engine->buffer);
@@ -1995,7 +1994,7 @@ static int intel_init_engine(struct drm_device *dev,

intel_engine_init_breadcrumbs(engine);

- ringbuf = intel_engine_create_ringbuffer(engine, 32 * PAGE_SIZE);
+ ringbuf = intel_engine_create_ring(engine, 32 * PAGE_SIZE);
if (IS_ERR(ringbuf)) {
ret = PTR_ERR(ringbuf);
goto error;
@@ -2013,7 +2012,7 @@ static int intel_init_engine(struct drm_device *dev,
goto error;
}

- ret = intel_pin_and_map_ringbuffer_obj(dev, ringbuf);
+ ret = intel_pin_and_map_ring(dev, ringbuf);
if (ret) {
DRM_ERROR("Failed to pin and map ringbuffer %s: %d\n",
engine->name, ret);
@@ -2043,8 +2042,8 @@ void intel_engine_cleanup(struct intel_engine_cs *ring)
intel_engine_stop(ring);
WARN_ON(!IS_GEN2(ring->dev) && (I915_READ_MODE(ring) & MODE_IDLE) == 0);

- intel_unpin_ringbuffer_obj(ring->buffer);
- intel_ringbuffer_free(ring->buffer);
+ intel_unpin_ring(ring->buffer);
+ intel_ring_free(ring->buffer);
ring->buffer = NULL;
}

@@ -2084,7 +2083,7 @@ int intel_ring_alloc_request_extras(struct drm_i915_gem_request *request)
return 0;
}

-void intel_ring_reserved_space_reserve(struct intel_ringbuffer *ringbuf, int size)
+void intel_ring_reserved_space_reserve(struct intel_ring *ringbuf, int size)
{
WARN_ON(ringbuf->reserved_size);
WARN_ON(ringbuf->reserved_in_use);
@@ -2092,7 +2091,7 @@ void intel_ring_reserved_space_reserve(struct intel_ringbuffer *ringbuf, int siz
ringbuf->reserved_size = size;
}

-void intel_ring_reserved_space_cancel(struct intel_ringbuffer *ringbuf)
+void intel_ring_reserved_space_cancel(struct intel_ring *ringbuf)
{
WARN_ON(ringbuf->reserved_in_use);

@@ -2100,7 +2099,7 @@ void intel_ring_reserved_space_cancel(struct intel_ringbuffer *ringbuf)
ringbuf->reserved_in_use = false;
}

-void intel_ring_reserved_space_use(struct intel_ringbuffer *ringbuf)
+void intel_ring_reserved_space_use(struct intel_ring *ringbuf)
{
WARN_ON(ringbuf->reserved_in_use);

@@ -2108,7 +2107,7 @@ void intel_ring_reserved_space_use(struct intel_ringbuffer *ringbuf)
ringbuf->reserved_tail = ringbuf->tail;
}

-void intel_ring_reserved_space_end(struct intel_ringbuffer *ringbuf)
+void intel_ring_reserved_space_end(struct intel_ring *ringbuf)
{
WARN_ON(!ringbuf->reserved_in_use);
if (ringbuf->tail > ringbuf->reserved_tail) {
@@ -2133,7 +2132,7 @@ void intel_ring_reserved_space_end(struct intel_ringbuffer *ringbuf)

static int wait_for_space(struct drm_i915_gem_request *req, int bytes)
{
- struct intel_ringbuffer *ring = req->ring;
+ struct intel_ring *ring = req->ring;
struct intel_engine_cs *engine = req->engine;
struct drm_i915_gem_request *target;
unsigned space;
@@ -2172,7 +2171,7 @@ static int wait_for_space(struct drm_i915_gem_request *req, int bytes)
return 0;
}

-static void ring_wrap(struct intel_ringbuffer *ringbuf)
+static void ring_wrap(struct intel_ring *ringbuf)
{
int rem = ringbuf->size - ringbuf->tail;
memset(ringbuf->virtual_start + ringbuf->tail, 0, rem);
@@ -2183,7 +2182,7 @@ static void ring_wrap(struct intel_ringbuffer *ringbuf)

static int ring_prepare(struct drm_i915_gem_request *req, int bytes)
{
- struct intel_ringbuffer *ring = req->ring;
+ struct intel_ring *ring = req->ring;
int remain_usable = ring->effective_size - ring->tail;
int remain_actual = ring->size - ring->tail;
int ret, total_bytes, wait_bytes = 0;
@@ -2243,7 +2242,7 @@ int intel_ring_begin(struct drm_i915_gem_request *req, int num_dwords)
/* Align the ring tail to a cacheline boundary */
int intel_ring_cacheline_align(struct drm_i915_gem_request *req)
{
- struct intel_ringbuffer *ring = req->ring;
+ struct intel_ring *ring = req->ring;
int num_dwords = (ring->tail & (CACHELINE_BYTES - 1)) / sizeof(uint32_t);
int ret;

@@ -2318,7 +2317,7 @@ static void gen6_bsd_ring_write_tail(struct intel_engine_cs *ring,
static int gen6_bsd_ring_flush(struct drm_i915_gem_request *req,
u32 invalidate, u32 flush)
{
- struct intel_ringbuffer *ring = req->ring;
+ struct intel_ring *ring = req->ring;
uint32_t cmd;
int ret;

@@ -2364,7 +2363,7 @@ gen8_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
u64 offset, u32 len,
unsigned dispatch_flags)
{
- struct intel_ringbuffer *ring = req->ring;
+ struct intel_ring *ring = req->ring;
bool ppgtt = USES_PPGTT(req->i915) &&
!(dispatch_flags & I915_DISPATCH_SECURE);
int ret;
@@ -2390,7 +2389,7 @@ hsw_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
u64 offset, u32 len,
unsigned dispatch_flags)
{
- struct intel_ringbuffer *ring = req->ring;
+ struct intel_ring *ring = req->ring;
int ret;

ret = intel_ring_begin(req, 2);
@@ -2415,7 +2414,7 @@ gen6_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
u64 offset, u32 len,
unsigned dispatch_flags)
{
- struct intel_ringbuffer *ring = req->ring;
+ struct intel_ring *ring = req->ring;
int ret;

ret = intel_ring_begin(req, 2);
@@ -2438,7 +2437,7 @@ gen6_ring_dispatch_execbuffer(struct drm_i915_gem_request *req,
static int gen6_ring_flush(struct drm_i915_gem_request *req,
u32 invalidate, u32 flush)
{
- struct intel_ringbuffer *ring = req->ring;
+ struct intel_ring *ring = req->ring;
uint32_t cmd;
int ret;

diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 6803e4820688..71941af13560 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -97,7 +97,7 @@ struct intel_engine_hangcheck {
u32 instdone[I915_NUM_INSTDONE_REG];
};

-struct intel_ringbuffer {
+struct intel_ring {
struct drm_i915_gem_object *obj;
void *virtual_start;

@@ -163,7 +163,7 @@ struct intel_engine_cs {
u32 mmio_base;
struct drm_device *dev;
struct drm_i915_private *i915;
- struct intel_ringbuffer *buffer;
+ struct intel_ring *buffer;
struct list_head buffers;

/* Rather than have every client wait upon all user interrupts,
@@ -454,12 +454,11 @@ intel_write_status_page(struct intel_engine_cs *ring,
#define I915_GEM_HWS_SCRATCH_INDEX 0x40
#define I915_GEM_HWS_SCRATCH_ADDR (I915_GEM_HWS_SCRATCH_INDEX << MI_STORE_DWORD_INDEX_SHIFT)

-struct intel_ringbuffer *
-intel_engine_create_ringbuffer(struct intel_engine_cs *engine, int size);
-int intel_pin_and_map_ringbuffer_obj(struct drm_device *dev,
- struct intel_ringbuffer *ringbuf);
-void intel_unpin_ringbuffer_obj(struct intel_ringbuffer *ringbuf);
-void intel_ringbuffer_free(struct intel_ringbuffer *ring);
+struct intel_ring *
+intel_engine_create_ring(struct intel_engine_cs *engine, int size);
+int intel_pin_and_map_ring(struct drm_device *dev, struct intel_ring *ring);
+void intel_unpin_ring(struct intel_ring *ring);
+void intel_ring_free(struct intel_ring *ring);

void intel_engine_stop(struct intel_engine_cs *ring);
void intel_engine_cleanup(struct intel_engine_cs *ring);
@@ -468,24 +467,22 @@ int intel_ring_alloc_request_extras(struct drm_i915_gem_request *request);

int __must_check intel_ring_begin(struct drm_i915_gem_request *req, int n);
int __must_check intel_ring_cacheline_align(struct drm_i915_gem_request *req);
-static inline void intel_ring_emit(struct intel_ringbuffer *rb,
- u32 data)
+static inline void intel_ring_emit(struct intel_ring *ring, u32 data)
{
- *(uint32_t *)(rb->virtual_start + rb->tail) = data;
- rb->tail += 4;
+ *(uint32_t *)(ring->virtual_start + ring->tail) = data;
+ ring->tail += 4;
}
-static inline void intel_ring_emit_reg(struct intel_ringbuffer *rb,
- i915_reg_t reg)
+static inline void intel_ring_emit_reg(struct intel_ring *ring, i915_reg_t reg)
{
- intel_ring_emit(rb, i915_mmio_reg_offset(reg));
+ intel_ring_emit(ring, i915_mmio_reg_offset(reg));
}
-static inline void intel_ring_advance(struct intel_ringbuffer *rb)
+static inline void intel_ring_advance(struct intel_ring *ring)
{
- rb->tail &= rb->size - 1;
+ ring->tail &= ring->size - 1;
}
int __intel_ring_space(int head, int tail, int size);
-void intel_ring_update_space(struct intel_ringbuffer *ringbuf);
-int intel_ring_space(struct intel_ringbuffer *ringbuf);
+void intel_ring_update_space(struct intel_ring *ringbuf);
+int intel_ring_space(struct intel_ring *ringbuf);

int __must_check intel_engine_idle(struct intel_engine_cs *ring);
void intel_engine_init_seqno(struct intel_engine_cs *ring, u32 seqno);
@@ -509,7 +506,7 @@ static inline u32 intel_engine_get_seqno(struct intel_engine_cs *ring)

int init_workarounds_ring(struct intel_engine_cs *ring);

-static inline u32 intel_ring_get_tail(struct intel_ringbuffer *ringbuf)
+static inline u32 intel_ring_get_tail(struct intel_ring *ringbuf)
{
return ringbuf->tail;
}
@@ -528,13 +525,13 @@ static inline u32 intel_ring_get_tail(struct intel_ringbuffer *ringbuf)
* will always have sufficient room to do its stuff. The request creation
* code calls this automatically.
*/
-void intel_ring_reserved_space_reserve(struct intel_ringbuffer *ringbuf, int size);
+void intel_ring_reserved_space_reserve(struct intel_ring *ringbuf, int size);
/* Cancel the reservation, e.g. because the request is being discarded. */
-void intel_ring_reserved_space_cancel(struct intel_ringbuffer *ringbuf);
+void intel_ring_reserved_space_cancel(struct intel_ring *ringbuf);
/* Use the reserved space - for use by i915_add_request() only. */
-void intel_ring_reserved_space_use(struct intel_ringbuffer *ringbuf);
+void intel_ring_reserved_space_use(struct intel_ring *ringbuf);
/* Finish with the reserved space - for use by i915_add_request() only. */
-void intel_ring_reserved_space_end(struct intel_ringbuffer *ringbuf);
+void intel_ring_reserved_space_end(struct intel_ring *ringbuf);

/* intel_breadcrumbs.c -- user interrupt bottom-half for waiters */
struct intel_wait {
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:17:03 UTC
Permalink
Ringbuffers are now written through either the LLC or WC paths, so
treating them as simply iomem is no longer adequate. However, on the
older !llc hardware, the TAIL register update is documented as
serialising, so we can relax the barriers when filling the rings (and
even if it were not, it is still an uncached register write and so
serialising anyway).

For simplicity, let's ignore the iomem annotation.

v2: Remove iomem from ringbuffer->virtual_address
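
To illustrate the relaxed model, a minimal sketch (not a hunk from this
patch; emit_noops_and_submit() is a hypothetical helper, while the
fields and the write_tail() hook are the driver's own):

	static void emit_noops_and_submit(struct intel_engine_cs *ring, int n)
	{
		struct intel_ringbuffer *ringbuf = ring->buffer;

		/* Plain CPU stores through the LLC or WC mapping. */
		while (n--) {
			*(u32 *)(ringbuf->virtual_start + ringbuf->tail) = MI_NOOP;
			ringbuf->tail += 4;
		}
		ringbuf->tail &= ringbuf->size - 1;

		/* The TAIL update is an uncached register write: it flushes
		 * the WC buffers and is documented as serialising, so no
		 * explicit mb() is needed before the GPU samples the ring.
		 */
		ring->write_tail(ring, ringbuf->tail);
	}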

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
Reviewed-by: Ville Syrjälä <***@linux.intel.com>
---
drivers/gpu/drm/i915/intel_lrc.c | 7 +------
drivers/gpu/drm/i915/intel_lrc.h | 6 +++---
drivers/gpu/drm/i915/intel_ringbuffer.c | 7 +------
drivers/gpu/drm/i915/intel_ringbuffer.h | 19 +++++++++++++------
4 files changed, 18 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 433e9f60e926..527eaf59be25 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -766,13 +766,8 @@ intel_logical_ring_advance_and_submit(struct drm_i915_gem_request *request)

static void __wrap_ring_buffer(struct intel_ringbuffer *ringbuf)
{
- uint32_t __iomem *virt;
int rem = ringbuf->size - ringbuf->tail;
-
- virt = ringbuf->virtual_start + ringbuf->tail;
- rem /= 4;
- while (rem--)
- iowrite32(MI_NOOP, virt++);
+ memset(ringbuf->virtual_start + ringbuf->tail, 0, rem);

ringbuf->tail = 0;
intel_ring_update_space(ringbuf);
diff --git a/drivers/gpu/drm/i915/intel_lrc.h b/drivers/gpu/drm/i915/intel_lrc.h
index de41ad6cd63d..1e58f2550777 100644
--- a/drivers/gpu/drm/i915/intel_lrc.h
+++ b/drivers/gpu/drm/i915/intel_lrc.h
@@ -71,8 +71,9 @@ int logical_ring_flush_all_caches(struct drm_i915_gem_request *req);
*/
static inline void intel_logical_ring_advance(struct intel_ringbuffer *ringbuf)
{
- ringbuf->tail &= ringbuf->size - 1;
+ intel_ringbuffer_advance(ringbuf);
}
+
/**
* intel_logical_ring_emit() - write a DWORD to the ringbuffer.
* @ringbuf: Ringbuffer to write to.
@@ -81,8 +82,7 @@ static inline void intel_logical_ring_advance(struct intel_ringbuffer *ringbuf)
static inline void intel_logical_ring_emit(struct intel_ringbuffer *ringbuf,
u32 data)
{
- iowrite32(data, ringbuf->virtual_start + ringbuf->tail);
- ringbuf->tail += 4;
+ intel_ringbuffer_emit(ringbuf, data);
}
static inline void intel_logical_ring_emit_reg(struct intel_ringbuffer *ringbuf,
i915_reg_t reg)
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index 2728c0ca0871..02b7032e16e0 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -2099,13 +2099,8 @@ static int ring_wait_for_space(struct intel_engine_cs *ring, int n)

static void __wrap_ring_buffer(struct intel_ringbuffer *ringbuf)
{
- uint32_t __iomem *virt;
int rem = ringbuf->size - ringbuf->tail;
-
- virt = ringbuf->virtual_start + ringbuf->tail;
- rem /= 4;
- while (rem--)
- iowrite32(MI_NOOP, virt++);
+ memset(ringbuf->virtual_start + ringbuf->tail, 0, rem);

ringbuf->tail = 0;
intel_ring_update_space(ringbuf);
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index a1fcb6c7501f..7669a8d30f27 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -99,7 +99,7 @@ struct intel_ring_hangcheck {

struct intel_ringbuffer {
struct drm_i915_gem_object *obj;
- void __iomem *virtual_start;
+ void *virtual_start;

struct intel_engine_cs *ring;
struct list_head link;
@@ -468,12 +468,20 @@ int intel_ring_alloc_request_extras(struct drm_i915_gem_request *request);

int __must_check intel_ring_begin(struct drm_i915_gem_request *req, int n);
int __must_check intel_ring_cacheline_align(struct drm_i915_gem_request *req);
+static inline void intel_ringbuffer_emit(struct intel_ringbuffer *rb,
+ u32 data)
+{
+ *(uint32_t *)(rb->virtual_start + rb->tail) = data;
+ rb->tail += 4;
+}
+static inline void intel_ringbuffer_advance(struct intel_ringbuffer *rb)
+{
+ rb->tail &= rb->size - 1;
+}
static inline void intel_ring_emit(struct intel_engine_cs *ring,
u32 data)
{
- struct intel_ringbuffer *ringbuf = ring->buffer;
- iowrite32(data, ringbuf->virtual_start + ringbuf->tail);
- ringbuf->tail += 4;
+ intel_ringbuffer_emit(ring->buffer, data);
}
static inline void intel_ring_emit_reg(struct intel_engine_cs *ring,
i915_reg_t reg)
@@ -482,8 +490,7 @@ static inline void intel_ring_emit_reg(struct intel_engine_cs *ring,
}
static inline void intel_ring_advance(struct intel_engine_cs *ring)
{
- struct intel_ringbuffer *ringbuf = ring->buffer;
- ringbuf->tail &= ringbuf->size - 1;
+ intel_ringbuffer_advance(ring->buffer);
}
int __intel_ring_space(int head, int tail, int size);
void intel_ring_update_space(struct intel_ringbuffer *ringbuf);
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:17:36 UTC
Permalink
In order to prevent a leak of the vma on shared objects, we need to
hook into the gem_close_object callback to destroy the vma on the
object for this file. However, if we destroyed that vma immediately, we
could cause unexpected application stalls as we try to unbind a busy
vma; hence we defer the unbind until the vma is retired.

v2: Keep vma allocated until closed. This is useful for a later
optimisation, but it is required now in order to handle potential
recursion of i915_vma_unbind() by retiring itself.
v3: Comments are important.
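
Condensed, the two halves of the deferral added below are (abbreviated
from the i915_gem.c and i915_gem_gtt.c hunks):

	static void i915_vma_close(struct i915_vma *vma)
	{
		GEM_BUG_ON(vma->closed);
		vma->closed = true;

		list_del_init(&vma->obj_link);
		if (!vma->active)	/* idle: safe to unbind immediately */
			WARN_ON(i915_vma_unbind(vma));
		/* otherwise the unbind is deferred to i915_vma_retire() */
	}

i915_vma_retire() then completes the other half: once the last request
using the vma is retired, a vma with vma->closed set is unbound (and,
via __i915_vma_unbind(), destroyed) without ever stalling the
application.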

Testcase: igt/gem_ppgtt/flink-and-close-vma-leak
Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <***@linux.intel.com>
Cc: Daniele Ceraolo Spurio <***@intel.com>
---
drivers/gpu/drm/i915/i915_drv.c | 1 +
drivers/gpu/drm/i915/i915_drv.h | 2 +-
drivers/gpu/drm/i915/i915_gem.c | 126 ++++++++++++++++++++++--------------
drivers/gpu/drm/i915/i915_gem_gtt.c | 2 +
drivers/gpu/drm/i915/i915_gem_gtt.h | 1 +
5 files changed, 84 insertions(+), 48 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index cc831a34f7bb..2a0882647c23 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -1664,6 +1664,7 @@ static struct drm_driver driver = {
.debugfs_init = i915_debugfs_init,
.debugfs_cleanup = i915_debugfs_cleanup,
#endif
+ .gem_close_object = i915_gem_close_object,
.gem_free_object = i915_gem_free_object,
.gem_vm_ops = &i915_gem_vm_ops,

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 9fa925389332..262d1b247344 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2673,8 +2673,8 @@ struct drm_i915_gem_object *i915_gem_alloc_object(struct drm_device *dev,
size_t size);
struct drm_i915_gem_object *i915_gem_object_create_from_data(
struct drm_device *dev, const void *data, size_t size);
+void i915_gem_close_object(struct drm_gem_object *gem, struct drm_file *file);
void i915_gem_free_object(struct drm_gem_object *obj);
-void i915_gem_vma_destroy(struct i915_vma *vma);

/* Flags used by pin/bind&friends. */
#define PIN_MAPPABLE (1<<0)
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 7e4f7f2d18e4..1f95cf39b7d2 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2385,6 +2385,30 @@ i915_gem_object_flush_active(struct drm_i915_gem_object *obj)
}
}

+static void i915_vma_close(struct i915_vma *vma)
+{
+ GEM_BUG_ON(vma->closed);
+ vma->closed = true;
+
+ list_del_init(&vma->obj_link);
+ if (!vma->active)
+ WARN_ON(i915_vma_unbind(vma));
+}
+
+void i915_gem_close_object(struct drm_gem_object *gem,
+ struct drm_file *file)
+{
+ struct drm_i915_gem_object *obj = to_intel_bo(gem);
+ struct drm_i915_file_private *fpriv = file->driver_priv;
+ struct i915_vma *vma, *vn;
+
+ mutex_lock(&obj->base.dev->struct_mutex);
+ list_for_each_entry_safe(vma, vn, &obj->vma_list, obj_link)
+ if (vma->vm->file == fpriv)
+ i915_vma_close(vma);
+ mutex_unlock(&obj->base.dev->struct_mutex);
+}
+
/**
* i915_gem_wait_ioctl - implements DRM_IOCTL_I915_GEM_WAIT
* @DRM_IOCTL_ARGS: standard ioctl arguments
@@ -2571,31 +2595,56 @@ static void i915_gem_object_finish_gtt(struct drm_i915_gem_object *obj)
old_write_domain);
}

+static void i915_vma_destroy(struct i915_vma *vma)
+{
+ GEM_BUG_ON(vma->node.allocated);
+ GEM_BUG_ON(vma->active);
+ GEM_BUG_ON(!vma->closed);
+
+ list_del(&vma->vm_link);
+ if (!vma->is_ggtt)
+ i915_ppgtt_put(i915_vm_to_ppgtt(vma->vm));
+
+ kmem_cache_free(to_i915(vma->obj->base.dev)->vmas, vma);
+}
+
static int __i915_vma_unbind(struct i915_vma *vma, bool wait)
{
struct drm_i915_gem_object *obj = vma->obj;
- int ret;
+ int ret, i;

- if (list_empty(&vma->obj_link))
- return 0;
+ /* First wait upon any activity as retiring the request may
+ * have side-effects such as unpinning or even unbinding this vma.
+ */
+ if (vma->active && wait) {
+ bool was_closed;

- if (!drm_mm_node_allocated(&vma->node)) {
- i915_gem_vma_destroy(vma);
- return 0;
+ /* When a closed VMA is retired, it is unbound - eek. */
+ was_closed = vma->closed;
+ vma->closed = false;
+
+ for (i = 0; i < ARRAY_SIZE(vma->last_read); i++) {
+ ret = i915_wait_request(vma->last_read[i].request);
+ if (ret)
+ break;
+ }
+
+ vma->closed = was_closed;
+ if (ret)
+ return ret;
+
+ GEM_BUG_ON(vma->active);
}

if (vma->pin_count)
return -EBUSY;

+ if (!drm_mm_node_allocated(&vma->node))
+ goto destroy;
+
GEM_BUG_ON(obj->bind_count == 0);
GEM_BUG_ON(obj->pages == NULL);

- if (wait) {
- ret = i915_gem_object_wait_rendering(obj, false);
- if (ret)
- return ret;
- }
-
if (vma->is_ggtt && vma->ggtt_view.type == I915_GGTT_VIEW_NORMAL) {
i915_gem_object_finish_gtt(obj);

@@ -2622,7 +2671,6 @@ static int __i915_vma_unbind(struct i915_vma *vma, bool wait)
}

drm_mm_remove_node(&vma->node);
- i915_gem_vma_destroy(vma);

/* Since the unbound list is global, only move to that list if
* no more VMAs exist. */
@@ -2636,6 +2684,10 @@ static int __i915_vma_unbind(struct i915_vma *vma, bool wait)
*/
i915_gem_object_unpin_pages(obj);

+destroy:
+ if (unlikely(vma->closed))
+ i915_vma_destroy(vma);
+
return 0;
}

@@ -2814,7 +2866,7 @@ i915_gem_object_bind_to_vm(struct drm_i915_gem_object *obj,

if (offset & (alignment - 1) || offset + size > end) {
ret = -EINVAL;
- goto err_free_vma;
+ goto err_vma;
}
vma->node.start = offset;
vma->node.size = size;
@@ -2826,7 +2878,7 @@ i915_gem_object_bind_to_vm(struct drm_i915_gem_object *obj,
ret = drm_mm_reserve_node(&vm->mm, &vma->node);
}
if (ret)
- goto err_free_vma;
+ goto err_vma;
} else {
if (flags & PIN_HIGH) {
search_flag = DRM_MM_SEARCH_BELOW;
@@ -2851,7 +2903,7 @@ search_free:
if (ret == 0)
goto search_free;

- goto err_free_vma;
+ goto err_vma;
}
}
if (WARN_ON(!i915_gem_valid_gtt_space(vma, obj->cache_level))) {
@@ -2872,8 +2924,7 @@ search_free:

err_remove_node:
drm_mm_remove_node(&vma->node);
-err_free_vma:
- i915_gem_vma_destroy(vma);
+err_vma:
vma = ERR_PTR(ret);
err_unpin:
i915_gem_object_unpin_pages(obj);
@@ -3808,21 +3859,18 @@ void i915_gem_free_object(struct drm_gem_object *gem_obj)

trace_i915_gem_object_destroy(obj);

+ /* All file-owned VMA should have been released by this point through
+ * i915_gem_close_object(), or earlier by i915_gem_context_close().
+ * However, the object may also be bound into the global GTT (e.g.
+ * older GPUs without per-process support, or for direct access through
+ * the GTT either for the user or for scanout). Those VMA still need to
+ * be unbound now.
+ */
list_for_each_entry_safe(vma, next, &obj->vma_list, obj_link) {
- int ret;
-
+ GEM_BUG_ON(!i915_is_ggtt(vma->vm));
+ GEM_BUG_ON(vma->active);
vma->pin_count = 0;
- ret = i915_vma_unbind(vma);
- if (WARN_ON(ret == -ERESTARTSYS)) {
- bool was_interruptible;
-
- was_interruptible = dev_priv->mm.interruptible;
- dev_priv->mm.interruptible = false;
-
- WARN_ON(i915_vma_unbind(vma));
-
- dev_priv->mm.interruptible = was_interruptible;
- }
+ i915_vma_close(vma);
}
GEM_BUG_ON(obj->bind_count);

@@ -3890,22 +3938,6 @@ struct i915_vma *i915_gem_obj_to_ggtt_view(struct drm_i915_gem_object *obj,
return NULL;
}

-void i915_gem_vma_destroy(struct i915_vma *vma)
-{
- WARN_ON(vma->node.allocated);
-
- /* Keep the vma as a placeholder in the execbuffer reservation lists */
- if (!list_empty(&vma->exec_list))
- return;
-
- if (!vma->is_ggtt)
- i915_ppgtt_put(i915_vm_to_ppgtt(vma->vm));
-
- list_del(&vma->obj_link);
-
- kmem_cache_free(to_i915(vma->obj->base.dev)->vmas, vma);
-}
-
static void
i915_gem_stop_ringbuffers(struct drm_device *dev)
{
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index fd42b6491d28..ef093db6b8a6 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -3257,6 +3257,8 @@ i915_vma_retire(struct i915_gem_active *active,
return;

list_move_tail(&vma->vm_link, &vma->vm->inactive_list);
+ if (unlikely(vma->closed))
+ WARN_ON(i915_vma_unbind(vma));
}

static struct i915_vma *
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
index 0a7867fa5a1f..d68d5fd02923 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.h
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
@@ -190,6 +190,7 @@ struct i915_vma {
unsigned int bound : 4;
unsigned int active : I915_NUM_RINGS;
bool is_ggtt : 1;
+ bool closed : 1;

/**
* Support different GGTT views into the same object.
--
2.7.0.rc3
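
To make the unbind ordering above concrete: retiring a request on a
closed vma unbinds (and thus frees) it, so the wait has to be performed
with the closed flag temporarily hidden. A condensed sketch using the
vma fields from the diff, with locking and the actual unbind elided
(wait_for_vma_idle() is just a hypothetical name for the inlined step):

static int wait_for_vma_idle(struct i915_vma *vma)
{
	bool was_closed;
	int ret = 0, i;

	/* Hide vma->closed so that i915_vma_retire() does not
	 * unbind and destroy the vma beneath us whilst we wait.
	 */
	was_closed = vma->closed;
	vma->closed = false;

	for (i = 0; i < ARRAY_SIZE(vma->last_read); i++) {
		ret = i915_wait_request(vma->last_read[i].request);
		if (ret)
			break;
	}

	vma->closed = was_closed;
	return ret;
}
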
Chris Wilson
2016-01-11 09:17:15 UTC
Permalink
For more consistent OO naming, we adopt the intel_ring_<verb>
convention, so pick intel_ring_map() (paired with intel_ring_unmap()).

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/intel_lrc.c | 6 ++---
drivers/gpu/drm/i915/intel_ringbuffer.c | 44 ++++++++++++++++-----------------
drivers/gpu/drm/i915/intel_ringbuffer.h | 4 +--
3 files changed, 27 insertions(+), 27 deletions(-)

diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index fa4c0c0db994..3a80d9d45f5c 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -889,7 +889,7 @@ static int intel_lr_context_do_pin(struct intel_engine_cs *ring,
if (ret)
return ret;

- ret = intel_pin_and_map_ring(ring->dev, ringbuf);
+ ret = intel_ring_map(ringbuf);
if (ret)
goto unpin_ctx_obj;

@@ -936,7 +936,7 @@ void intel_lr_context_unpin(struct drm_i915_gem_request *rq)
if (ctx_obj) {
WARN_ON(!mutex_is_locked(&rq->i915->dev->struct_mutex));
if (--rq->ctx->engine[engine].pin_count == 0) {
- intel_unpin_ring(ring);
+ intel_ring_unmap(ring);
i915_gem_object_ggtt_unpin(ctx_obj);
i915_gem_context_unreference(rq->ctx);
}
@@ -2178,7 +2178,7 @@ void intel_lr_context_free(struct intel_context *ctx)
struct intel_engine_cs *engine = ring->engine;

if (ctx == engine->default_context) {
- intel_unpin_ring(ring);
+ intel_ring_unmap(ring);
i915_gem_object_ggtt_unpin(ctx_obj);
}
WARN_ON(ctx->engine[engine->id].pin_count);
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index 95974156a1d9..74a4a54e6ca5 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -1845,22 +1845,12 @@ static int init_phys_status_page(struct intel_engine_cs *ring)
return 0;
}

-void intel_unpin_ring(struct intel_ring *ringbuf)
+int intel_ring_map(struct intel_ring *ring)
{
- if (HAS_LLC(ringbuf->obj->base.dev) && !ringbuf->obj->stolen)
- i915_gem_object_unpin_vmap(ringbuf->obj);
- else
- iounmap(ringbuf->virtual_start);
- i915_gem_object_ggtt_unpin(ringbuf->obj);
-}
-
-int intel_pin_and_map_ring(struct drm_device *dev, struct intel_ring *ringbuf)
-{
- struct drm_i915_private *dev_priv = to_i915(dev);
- struct drm_i915_gem_object *obj = ringbuf->obj;
+ struct drm_i915_gem_object *obj = ring->obj;
int ret;

- if (HAS_LLC(dev_priv) && !obj->stolen) {
+ if (HAS_LLC(ring->engine->i915) && !obj->stolen) {
ret = i915_gem_obj_ggtt_pin(obj, PAGE_SIZE, 0);
if (ret)
return ret;
@@ -1869,10 +1859,10 @@ int intel_pin_and_map_ring(struct drm_device *dev, struct intel_ring *ringbuf)
if (ret)
goto unpin;

- ringbuf->virtual_start = i915_gem_object_pin_vmap(obj);
- if (IS_ERR(ringbuf->virtual_start)) {
- ret = PTR_ERR(ringbuf->virtual_start);
- ringbuf->virtual_start = NULL;
+ ring->virtual_start = i915_gem_object_pin_vmap(obj);
+ if (IS_ERR(ring->virtual_start)) {
+ ret = PTR_ERR(ring->virtual_start);
+ ring->virtual_start = NULL;
goto unpin;
}
} else {
@@ -1884,9 +1874,10 @@ int intel_pin_and_map_ring(struct drm_device *dev, struct intel_ring *ringbuf)
if (ret)
goto unpin;

- ringbuf->virtual_start = ioremap_wc(dev_priv->gtt.mappable_base +
- i915_gem_obj_ggtt_offset(obj), ringbuf->size);
- if (ringbuf->virtual_start == NULL) {
+ ring->virtual_start = ioremap_wc(ring->engine->i915->gtt.mappable_base +
+ i915_gem_obj_ggtt_offset(obj),
+ ring->size);
+ if (ring->virtual_start == NULL) {
ret = -ENOMEM;
goto unpin;
}
@@ -1899,6 +1890,15 @@ unpin:
return ret;
}

+void intel_ring_unmap(struct intel_ring *ring)
+{
+ if (HAS_LLC(ring->engine->i915) && !ring->obj->stolen)
+ i915_gem_object_unpin_vmap(ring->obj);
+ else
+ iounmap(ring->virtual_start);
+ i915_gem_object_ggtt_unpin(ring->obj);
+}
+
static void intel_destroy_ringbuffer_obj(struct intel_ring *ringbuf)
{
drm_gem_object_unreference(&ringbuf->obj->base);
@@ -2012,7 +2012,7 @@ static int intel_init_engine(struct drm_device *dev,
goto error;
}

- ret = intel_pin_and_map_ring(dev, ringbuf);
+ ret = intel_ring_map(ringbuf);
if (ret) {
DRM_ERROR("Failed to pin and map ringbuffer %s: %d\n",
engine->name, ret);
@@ -2042,7 +2042,7 @@ void intel_engine_cleanup(struct intel_engine_cs *ring)
intel_engine_stop(ring);
WARN_ON(!IS_GEN2(ring->dev) && (I915_READ_MODE(ring) & MODE_IDLE) == 0);

- intel_unpin_ring(ring->buffer);
+ intel_ring_unmap(ring->buffer);
intel_ring_free(ring->buffer);
ring->buffer = NULL;
}
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 71941af13560..15d067b9b8a2 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -456,8 +456,8 @@ intel_write_status_page(struct intel_engine_cs *ring,

struct intel_ring *
intel_engine_create_ring(struct intel_engine_cs *engine, int size);
-int intel_pin_and_map_ring(struct drm_device *dev, struct intel_ring *ring);
-void intel_unpin_ring(struct intel_ring *ring);
+int intel_ring_map(struct intel_ring *ring);
+void intel_ring_unmap(struct intel_ring *ring);
void intel_ring_free(struct intel_ring *ring);

void intel_engine_stop(struct intel_engine_cs *ring);
--
2.7.0.rc3
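
Behind the rename, intel_ring_map() keeps the two mapping strategies
shown in the diff; a condensed sketch of the decision, with the GGTT
pinning and error unwind elided:

int intel_ring_map(struct intel_ring *ring)
{
	struct drm_i915_gem_object *obj = ring->obj;
	void *ptr;

	if (HAS_LLC(ring->engine->i915) && !obj->stolen) {
		/* Coherent CPU access: vmap the object's pages */
		ptr = i915_gem_object_pin_vmap(obj);
		if (IS_ERR(ptr))
			return PTR_ERR(ptr);
	} else {
		/* No LLC, or stolen memory: WC iomap via the aperture */
		ptr = ioremap_wc(ring->engine->i915->gtt.mappable_base +
				 i915_gem_obj_ggtt_offset(obj),
				 ring->size);
		if (ptr == NULL)
			return -ENOMEM;
	}

	ring->virtual_start = ptr;
	return 0;
}
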
Chris Wilson
2016-01-11 09:17:27 UTC
Permalink
Elsewhere we have adopted the convention of using '_link' to denote
elements in a list (and '_list' for the actual list_head itself), and
of having the name indicate which list the link belongs to (preferably
not just where the link is being stored).

s/vma_link/obj_link/ (we iterate over obj->vma_list)
s/mm_list/vm_link/ (we iterate over vm->[in]active_list)
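
In short, the container names the list and the member names the link it
sits on, so an iteration reads naturally; a minimal illustration:

struct i915_address_space {
	struct list_head active_list;	/* the list itself: '_list' */
	struct list_head inactive_list;
};

struct i915_vma {
	struct list_head vm_link;	/* our place on a vm's lists */
	struct list_head obj_link;	/* our place on obj->vma_list */
};

/* the link name says which list is being walked */
list_for_each_entry(vma, &vm->active_list, vm_link)
	active_count++;
list_for_each_entry(vma, &obj->vma_list, obj_link)
	vma_count++;
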

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_debugfs.c | 17 +++++------
drivers/gpu/drm/i915/i915_gem.c | 50 ++++++++++++++++----------------
drivers/gpu/drm/i915/i915_gem_context.c | 2 +-
drivers/gpu/drm/i915/i915_gem_evict.c | 6 ++--
drivers/gpu/drm/i915/i915_gem_gtt.c | 10 +++----
drivers/gpu/drm/i915/i915_gem_gtt.h | 4 +--
drivers/gpu/drm/i915/i915_gem_shrinker.c | 4 +--
drivers/gpu/drm/i915/i915_gem_stolen.c | 2 +-
drivers/gpu/drm/i915/i915_gem_userptr.c | 2 +-
drivers/gpu/drm/i915/i915_gpu_error.c | 8 ++---
10 files changed, 52 insertions(+), 53 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index efa9572fc217..f311df758195 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -117,9 +117,8 @@ static u64 i915_gem_obj_total_ggtt_size(struct drm_i915_gem_object *obj)
u64 size = 0;
struct i915_vma *vma;

- list_for_each_entry(vma, &obj->vma_list, vma_link) {
- if (i915_is_ggtt(vma->vm) &&
- drm_mm_node_allocated(&vma->node))
+ list_for_each_entry(vma, &obj->vma_list, obj_link) {
+ if (i915_is_ggtt(vma->vm) && drm_mm_node_allocated(&vma->node))
size += vma->node.size;
}

@@ -155,7 +154,7 @@ describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj)
obj->madv == I915_MADV_DONTNEED ? " purgeable" : "");
if (obj->base.name)
seq_printf(m, " (name: %d)", obj->base.name);
- list_for_each_entry(vma, &obj->vma_list, vma_link) {
+ list_for_each_entry(vma, &obj->vma_list, obj_link) {
if (vma->pin_count > 0)
pin_count++;
}
@@ -164,7 +163,7 @@ describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj)
seq_printf(m, " (display)");
if (obj->fence_reg != I915_FENCE_REG_NONE)
seq_printf(m, " (fence: %d)", obj->fence_reg);
- list_for_each_entry(vma, &obj->vma_list, vma_link) {
+ list_for_each_entry(vma, &obj->vma_list, obj_link) {
seq_printf(m, " (%sgtt offset: %08llx, size: %08llx",
i915_is_ggtt(vma->vm) ? "g" : "pp",
vma->node.start, vma->node.size);
@@ -229,7 +228,7 @@ static int i915_gem_object_list_info(struct seq_file *m, void *data)
}

total_obj_size = total_gtt_size = count = 0;
- list_for_each_entry(vma, head, mm_list) {
+ list_for_each_entry(vma, head, vm_link) {
seq_printf(m, " ");
describe_obj(m, vma->obj);
seq_printf(m, "\n");
@@ -341,7 +340,7 @@ static int per_file_stats(int id, void *ptr, void *data)
stats->shared += obj->base.size;

if (USES_FULL_PPGTT(obj->base.dev)) {
- list_for_each_entry(vma, &obj->vma_list, vma_link) {
+ list_for_each_entry(vma, &obj->vma_list, obj_link) {
struct i915_hw_ppgtt *ppgtt;

if (!drm_mm_node_allocated(&vma->node))
@@ -453,12 +452,12 @@ static int i915_gem_object_info(struct seq_file *m, void* data)
count, mappable_count, size, mappable_size);

size = count = mappable_size = mappable_count = 0;
- count_vmas(&vm->active_list, mm_list);
+ count_vmas(&vm->active_list, vm_link);
seq_printf(m, " %u [%u] active objects, %llu [%llu] bytes\n",
count, mappable_count, size, mappable_size);

size = count = mappable_size = mappable_count = 0;
- count_vmas(&vm->inactive_list, mm_list);
+ count_vmas(&vm->inactive_list, vm_link);
seq_printf(m, " %u [%u] inactive objects, %llu [%llu] bytes\n",
count, mappable_count, size, mappable_size);

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 4eef13ebdaf3..e4d7c7f5aca2 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -128,10 +128,10 @@ i915_gem_get_aperture_ioctl(struct drm_device *dev, void *data,

pinned = 0;
mutex_lock(&dev->struct_mutex);
- list_for_each_entry(vma, &ggtt->base.active_list, mm_list)
+ list_for_each_entry(vma, &ggtt->base.active_list, vm_link)
if (vma->pin_count)
pinned += vma->node.size;
- list_for_each_entry(vma, &ggtt->base.inactive_list, mm_list)
+ list_for_each_entry(vma, &ggtt->base.inactive_list, vm_link)
if (vma->pin_count)
pinned += vma->node.size;
mutex_unlock(&dev->struct_mutex);
@@ -261,7 +261,7 @@ drop_pages(struct drm_i915_gem_object *obj)
int ret;

drm_gem_object_reference(&obj->base);
- list_for_each_entry_safe(vma, next, &obj->vma_list, vma_link)
+ list_for_each_entry_safe(vma, next, &obj->vma_list, obj_link)
if (i915_vma_unbind(vma))
break;

@@ -2038,7 +2038,7 @@ void i915_vma_move_to_active(struct i915_vma *vma,
obj->active |= intel_engine_flag(engine);

i915_gem_request_mark_active(req, &obj->last_read[engine->id]);
- list_move_tail(&vma->mm_list, &vma->vm->active_list);
+ list_move_tail(&vma->vm_link, &vma->vm->active_list);
}

static void
@@ -2079,9 +2079,9 @@ i915_gem_object_retire__read(struct i915_gem_active *active,
*/
list_move_tail(&obj->global_list, &request->i915->mm.bound_list);

- list_for_each_entry(vma, &obj->vma_list, vma_link) {
- if (!list_empty(&vma->mm_list))
- list_move_tail(&vma->mm_list, &vma->vm->inactive_list);
+ list_for_each_entry(vma, &obj->vma_list, obj_link) {
+ if (!list_empty(&vma->vm_link))
+ list_move_tail(&vma->vm_link, &vma->vm->inactive_list);
}

drm_gem_object_unreference(&obj->base);
@@ -2576,7 +2576,7 @@ static int __i915_vma_unbind(struct i915_vma *vma, bool wait)
struct drm_i915_private *dev_priv = obj->base.dev->dev_private;
int ret;

- if (list_empty(&vma->vma_link))
+ if (list_empty(&vma->obj_link))
return 0;

if (!drm_mm_node_allocated(&vma->node)) {
@@ -2610,7 +2610,7 @@ static int __i915_vma_unbind(struct i915_vma *vma, bool wait)
vma->vm->unbind_vma(vma);
vma->bound = 0;

- list_del_init(&vma->mm_list);
+ list_del_init(&vma->vm_link);
if (i915_is_ggtt(vma->vm)) {
if (vma->ggtt_view.type == I915_GGTT_VIEW_NORMAL) {
obj->map_and_fenceable = false;
@@ -2864,7 +2864,7 @@ search_free:
goto err_remove_node;

list_move_tail(&obj->global_list, &dev_priv->mm.bound_list);
- list_add_tail(&vma->mm_list, &vm->inactive_list);
+ list_add_tail(&vma->vm_link, &vm->inactive_list);

return vma;

@@ -3029,7 +3029,7 @@ i915_gem_object_set_to_gtt_domain(struct drm_i915_gem_object *obj, bool write)
/* And bump the LRU for this access */
vma = i915_gem_obj_to_ggtt(obj);
if (vma && drm_mm_node_allocated(&vma->node) && !obj->active)
- list_move_tail(&vma->mm_list,
+ list_move_tail(&vma->vm_link,
&to_i915(obj->base.dev)->gtt.base.inactive_list);

return 0;
@@ -3064,7 +3064,7 @@ int i915_gem_object_set_cache_level(struct drm_i915_gem_object *obj,
* catch the issue of the CS prefetch crossing page boundaries and
* reading an invalid PTE on older architectures.
*/
- list_for_each_entry_safe(vma, next, &obj->vma_list, vma_link) {
+ list_for_each_entry_safe(vma, next, &obj->vma_list, obj_link) {
if (!drm_mm_node_allocated(&vma->node))
continue;

@@ -3127,7 +3127,7 @@ int i915_gem_object_set_cache_level(struct drm_i915_gem_object *obj,
*/
}

- list_for_each_entry(vma, &obj->vma_list, vma_link) {
+ list_for_each_entry(vma, &obj->vma_list, obj_link) {
if (!drm_mm_node_allocated(&vma->node))
continue;

@@ -3137,7 +3137,7 @@ int i915_gem_object_set_cache_level(struct drm_i915_gem_object *obj,
}
}

- list_for_each_entry(vma, &obj->vma_list, vma_link)
+ list_for_each_entry(vma, &obj->vma_list, obj_link)
vma->node.color = cache_level;
obj->cache_level = cache_level;

@@ -3797,7 +3797,7 @@ void i915_gem_free_object(struct drm_gem_object *gem_obj)

trace_i915_gem_object_destroy(obj);

- list_for_each_entry_safe(vma, next, &obj->vma_list, vma_link) {
+ list_for_each_entry_safe(vma, next, &obj->vma_list, obj_link) {
int ret;

vma->pin_count = 0;
@@ -3854,7 +3854,7 @@ struct i915_vma *i915_gem_obj_to_vma(struct drm_i915_gem_object *obj,
struct i915_address_space *vm)
{
struct i915_vma *vma;
- list_for_each_entry(vma, &obj->vma_list, vma_link) {
+ list_for_each_entry(vma, &obj->vma_list, obj_link) {
if (vma->ggtt_view.type == I915_GGTT_VIEW_NORMAL &&
vma->vm == vm)
return vma;
@@ -3871,7 +3871,7 @@ struct i915_vma *i915_gem_obj_to_ggtt_view(struct drm_i915_gem_object *obj,
if (WARN_ONCE(!view, "no view specified"))
return ERR_PTR(-EINVAL);

- list_for_each_entry(vma, &obj->vma_list, vma_link)
+ list_for_each_entry(vma, &obj->vma_list, obj_link)
if (vma->vm == ggtt &&
i915_ggtt_view_equal(&vma->ggtt_view, view))
return vma;
@@ -3892,7 +3892,7 @@ void i915_gem_vma_destroy(struct i915_vma *vma)
if (!i915_is_ggtt(vm))
i915_ppgtt_put(i915_vm_to_ppgtt(vm));

- list_del(&vma->vma_link);
+ list_del(&vma->obj_link);

kmem_cache_free(to_i915(vma->obj->base.dev)->vmas, vma);
}
@@ -4444,7 +4444,7 @@ u64 i915_gem_obj_offset(struct drm_i915_gem_object *o,

WARN_ON(vm == &dev_priv->mm.aliasing_ppgtt->base);

- list_for_each_entry(vma, &o->vma_list, vma_link) {
+ list_for_each_entry(vma, &o->vma_list, obj_link) {
if (i915_is_ggtt(vma->vm) &&
vma->ggtt_view.type != I915_GGTT_VIEW_NORMAL)
continue;
@@ -4463,7 +4463,7 @@ u64 i915_gem_obj_ggtt_offset_view(struct drm_i915_gem_object *o,
struct i915_address_space *ggtt = i915_obj_to_ggtt(o);
struct i915_vma *vma;

- list_for_each_entry(vma, &o->vma_list, vma_link)
+ list_for_each_entry(vma, &o->vma_list, obj_link)
if (vma->vm == ggtt &&
i915_ggtt_view_equal(&vma->ggtt_view, view))
return vma->node.start;
@@ -4477,7 +4477,7 @@ bool i915_gem_obj_bound(struct drm_i915_gem_object *o,
{
struct i915_vma *vma;

- list_for_each_entry(vma, &o->vma_list, vma_link) {
+ list_for_each_entry(vma, &o->vma_list, obj_link) {
if (i915_is_ggtt(vma->vm) &&
vma->ggtt_view.type != I915_GGTT_VIEW_NORMAL)
continue;
@@ -4494,7 +4494,7 @@ bool i915_gem_obj_ggtt_bound_view(struct drm_i915_gem_object *o,
struct i915_address_space *ggtt = i915_obj_to_ggtt(o);
struct i915_vma *vma;

- list_for_each_entry(vma, &o->vma_list, vma_link)
+ list_for_each_entry(vma, &o->vma_list, obj_link)
if (vma->vm == ggtt &&
i915_ggtt_view_equal(&vma->ggtt_view, view) &&
drm_mm_node_allocated(&vma->node))
@@ -4507,7 +4507,7 @@ bool i915_gem_obj_bound_any(struct drm_i915_gem_object *o)
{
struct i915_vma *vma;

- list_for_each_entry(vma, &o->vma_list, vma_link)
+ list_for_each_entry(vma, &o->vma_list, obj_link)
if (drm_mm_node_allocated(&vma->node))
return true;

@@ -4524,7 +4524,7 @@ unsigned long i915_gem_obj_size(struct drm_i915_gem_object *o,

BUG_ON(list_empty(&o->vma_list));

- list_for_each_entry(vma, &o->vma_list, vma_link) {
+ list_for_each_entry(vma, &o->vma_list, obj_link) {
if (i915_is_ggtt(vma->vm) &&
vma->ggtt_view.type != I915_GGTT_VIEW_NORMAL)
continue;
@@ -4537,7 +4537,7 @@ unsigned long i915_gem_obj_size(struct drm_i915_gem_object *o,
bool i915_gem_obj_is_pinned(struct drm_i915_gem_object *obj)
{
struct i915_vma *vma;
- list_for_each_entry(vma, &obj->vma_list, vma_link)
+ list_for_each_entry(vma, &obj->vma_list, obj_link)
if (vma->pin_count > 0)
return true;

diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index 72b0875a95a4..05b4e0e85f24 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -142,7 +142,7 @@ static void i915_gem_context_clean(struct intel_context *ctx)
return;

list_for_each_entry_safe(vma, next, &ppgtt->base.inactive_list,
- mm_list) {
+ vm_link) {
if (WARN_ON(__i915_vma_unbind_no_wait(vma)))
break;
}
diff --git a/drivers/gpu/drm/i915/i915_gem_evict.c b/drivers/gpu/drm/i915/i915_gem_evict.c
index 07c6e4d320c9..ea1f8d1bd228 100644
--- a/drivers/gpu/drm/i915/i915_gem_evict.c
+++ b/drivers/gpu/drm/i915/i915_gem_evict.c
@@ -116,7 +116,7 @@ i915_gem_evict_something(struct drm_device *dev, struct i915_address_space *vm,

search_again:
/* First see if there is a large enough contiguous idle region... */
- list_for_each_entry(vma, &vm->inactive_list, mm_list) {
+ list_for_each_entry(vma, &vm->inactive_list, vm_link) {
if (mark_free(vma, &unwind_list))
goto found;
}
@@ -125,7 +125,7 @@ search_again:
goto none;

/* Now merge in the soon-to-be-expired objects... */
- list_for_each_entry(vma, &vm->active_list, mm_list) {
+ list_for_each_entry(vma, &vm->active_list, vm_link) {
if (mark_free(vma, &unwind_list))
goto found;
}
@@ -270,7 +270,7 @@ int i915_gem_evict_vm(struct i915_address_space *vm, bool do_idle)
WARN_ON(!list_empty(&vm->active_list));
}

- list_for_each_entry_safe(vma, next, &vm->inactive_list, mm_list)
+ list_for_each_entry_safe(vma, next, &vm->inactive_list, vm_link)
if (vma->pin_count == 0)
WARN_ON(i915_vma_unbind(vma));

diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index cddbd8c00663..6168182a87d8 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -2736,7 +2736,7 @@ static int i915_gem_setup_global_gtt(struct drm_device *dev,
}
vma->bound |= GLOBAL_BIND;
__i915_vma_set_map_and_fenceable(vma);
- list_add_tail(&vma->mm_list, &ggtt_vm->inactive_list);
+ list_add_tail(&vma->vm_link, &ggtt_vm->inactive_list);
}

/* Clear any non-preallocated blocks */
@@ -3221,7 +3221,7 @@ void i915_gem_restore_gtt_mappings(struct drm_device *dev)
vm = &dev_priv->gtt.base;
list_for_each_entry(obj, &dev_priv->mm.bound_list, global_list) {
flush = false;
- list_for_each_entry(vma, &obj->vma_list, vma_link) {
+ list_for_each_entry(vma, &obj->vma_list, obj_link) {
if (vma->vm != vm)
continue;

@@ -3277,8 +3277,8 @@ __i915_gem_vma_create(struct drm_i915_gem_object *obj,
if (vma == NULL)
return ERR_PTR(-ENOMEM);

- INIT_LIST_HEAD(&vma->vma_link);
- INIT_LIST_HEAD(&vma->mm_list);
+ INIT_LIST_HEAD(&vma->vm_link);
+ INIT_LIST_HEAD(&vma->obj_link);
INIT_LIST_HEAD(&vma->exec_list);
vma->vm = vm;
vma->obj = obj;
@@ -3286,7 +3286,7 @@ __i915_gem_vma_create(struct drm_i915_gem_object *obj,
if (i915_is_ggtt(vm))
vma->ggtt_view = *ggtt_view;

- list_add_tail(&vma->vma_link, &obj->vma_list);
+ list_add_tail(&vma->obj_link, &obj->vma_list);
if (!i915_is_ggtt(vm))
i915_ppgtt_get(i915_vm_to_ppgtt(vm));

diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
index b448ad832dcf..2497671d1e1a 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.h
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
@@ -195,9 +195,9 @@ struct i915_vma {
struct i915_ggtt_view ggtt_view;

/** This object's place on the active/inactive lists */
- struct list_head mm_list;
+ struct list_head vm_link;

- struct list_head vma_link; /* Link in the object's VMA list */
+ struct list_head obj_link; /* Link in the object's VMA list */

/** This vma's place in the batchbuffer or on the eviction list */
struct list_head exec_list;
diff --git a/drivers/gpu/drm/i915/i915_gem_shrinker.c b/drivers/gpu/drm/i915/i915_gem_shrinker.c
index 16da9c1422cc..777959b47ccf 100644
--- a/drivers/gpu/drm/i915/i915_gem_shrinker.c
+++ b/drivers/gpu/drm/i915/i915_gem_shrinker.c
@@ -52,7 +52,7 @@ static int num_vma_bound(struct drm_i915_gem_object *obj)
struct i915_vma *vma;
int count = 0;

- list_for_each_entry(vma, &obj->vma_list, vma_link) {
+ list_for_each_entry(vma, &obj->vma_list, obj_link) {
if (drm_mm_node_allocated(&vma->node))
count++;
if (vma->pin_count)
@@ -176,7 +176,7 @@ i915_gem_shrink(struct drm_i915_private *dev_priv,

/* For the unbound phase, this should be a no-op! */
list_for_each_entry_safe(vma, v,
- &obj->vma_list, vma_link)
+ &obj->vma_list, obj_link)
if (i915_vma_unbind(vma))
break;

diff --git a/drivers/gpu/drm/i915/i915_gem_stolen.c b/drivers/gpu/drm/i915/i915_gem_stolen.c
index c384dc9c8a63..590e635cb65c 100644
--- a/drivers/gpu/drm/i915/i915_gem_stolen.c
+++ b/drivers/gpu/drm/i915/i915_gem_stolen.c
@@ -692,7 +692,7 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_device *dev,

vma->bound |= GLOBAL_BIND;
__i915_vma_set_map_and_fenceable(vma);
- list_add_tail(&vma->mm_list, &ggtt->inactive_list);
+ list_add_tail(&vma->vm_link, &ggtt->inactive_list);
}

list_add_tail(&obj->global_list, &dev_priv->mm.bound_list);
diff --git a/drivers/gpu/drm/i915/i915_gem_userptr.c b/drivers/gpu/drm/i915/i915_gem_userptr.c
index 251e81c4b0ea..2f3638d02bdd 100644
--- a/drivers/gpu/drm/i915/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/i915_gem_userptr.c
@@ -81,7 +81,7 @@ static void __cancel_userptr__worker(struct work_struct *work)
was_interruptible = dev_priv->mm.interruptible;
dev_priv->mm.interruptible = false;

- list_for_each_entry_safe(vma, tmp, &obj->vma_list, vma_link)
+ list_for_each_entry_safe(vma, tmp, &obj->vma_list, obj_link)
WARN_ON(i915_vma_unbind(vma));
WARN_ON(i915_gem_object_put_pages(obj));

diff --git a/drivers/gpu/drm/i915/i915_gpu_error.c b/drivers/gpu/drm/i915/i915_gpu_error.c
index c812079bc25c..706d956b6eb3 100644
--- a/drivers/gpu/drm/i915/i915_gpu_error.c
+++ b/drivers/gpu/drm/i915/i915_gpu_error.c
@@ -731,7 +731,7 @@ static u32 capture_active_bo(struct drm_i915_error_buffer *err,
struct i915_vma *vma;
int i = 0;

- list_for_each_entry(vma, head, mm_list) {
+ list_for_each_entry(vma, head, vm_link) {
capture_bo(err++, vma);
if (++i == count)
break;
@@ -754,7 +754,7 @@ static u32 capture_pinned_bo(struct drm_i915_error_buffer *err,
if (err == last)
break;

- list_for_each_entry(vma, &obj->vma_list, vma_link)
+ list_for_each_entry(vma, &obj->vma_list, obj_link)
if (vma->vm == vm && vma->pin_count > 0)
capture_bo(err++, vma);
}
@@ -1113,12 +1113,12 @@ static void i915_gem_capture_vm(struct drm_i915_private *dev_priv,
int i;

i = 0;
- list_for_each_entry(vma, &vm->active_list, mm_list)
+ list_for_each_entry(vma, &vm->active_list, vm_link)
i++;
error->active_bo_count[ndx] = i;

list_for_each_entry(obj, &dev_priv->mm.bound_list, global_list) {
- list_for_each_entry(vma, &obj->vma_list, vma_link)
+ list_for_each_entry(vma, &obj->vma_list, obj_link)
if (vma->vm == vm && vma->pin_count > 0)
i++;
}
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:17:35 UTC
Permalink
Hook the vma itself into i915_gem_request_retire() so that we can
accurately track when a solitary vma is inactive (as opposed to having
to wait for the entire object to be idle). This improves the interaction
when using multiple contexts (with full-ppgtt) and eliminates some
frequent list walking when retiring objects after a completed request.

A side-effect is that we get an active vma reference for free. The
consequence of this is shown in the next patch...
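
The tracking reduces to a per-engine bitmask: each engine sets its bit
in vma->active on submission and the retire callback clears it, so the
vma moves to the inactive list only once the last engine has retired
its request. Condensed from the diff:

/* on submission, in i915_vma_move_to_active() */
vma->active |= 1 << engine;
i915_gem_request_mark_active(req, &vma->last_read[engine]);
list_move_tail(&vma->vm_link, &vma->vm->active_list);

/* on retirement, i915_vma_retire() is called once per engine */
vma->active &= ~(1 << engine);
if (!vma->active)	/* now idle on every engine */
	list_move_tail(&vma->vm_link, &vma->vm->inactive_list);
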

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_debugfs.c | 2 +-
drivers/gpu/drm/i915/i915_gem.c | 15 +++++----------
drivers/gpu/drm/i915/i915_gem_execbuffer.c | 10 +++++++++-
drivers/gpu/drm/i915/i915_gem_gtt.c | 21 +++++++++++++++++++++
drivers/gpu/drm/i915/i915_gem_gtt.h | 5 +++++
5 files changed, 41 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index e2b1242e369b..378bc73296aa 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -357,7 +357,7 @@ static int per_file_stats(int id, void *ptr, void *data)
continue;
}

- if (obj->active) /* XXX per-vma statistic */
+ if (vma->active)
stats->active += vma->node.size;
else
stats->inactive += vma->node.size;
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 95e69dc47fc8..7e4f7f2d18e4 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2070,7 +2070,6 @@ i915_gem_object_retire__read(struct i915_gem_active *active,
int ring = request->engine->id;
struct drm_i915_gem_object *obj =
container_of(active, struct drm_i915_gem_object, last_read[ring]);
- struct i915_vma *vma;

GEM_BUG_ON((obj->active & (1 << ring)) == 0);

@@ -2082,12 +2081,9 @@ i915_gem_object_retire__read(struct i915_gem_active *active,
* so that we don't steal from recently used but inactive objects
* (unless we are forced to ofc!)
*/
- list_move_tail(&obj->global_list, &request->i915->mm.bound_list);
-
- list_for_each_entry(vma, &obj->vma_list, obj_link) {
- if (!list_empty(&vma->vm_link))
- list_move_tail(&vma->vm_link, &vma->vm->inactive_list);
- }
+ if (obj->bind_count)
+ list_move_tail(&obj->global_list,
+ &request->i915->mm.bound_list);

drm_gem_object_unreference(&obj->base);
}
@@ -3034,9 +3030,8 @@ i915_gem_object_set_to_gtt_domain(struct drm_i915_gem_object *obj, bool write)

/* And bump the LRU for this access */
vma = i915_gem_obj_to_ggtt(obj);
- if (vma && drm_mm_node_allocated(&vma->node) && !obj->active)
- list_move_tail(&vma->vm_link,
- &to_i915(obj->base.dev)->gtt.base.inactive_list);
+ if (vma && drm_mm_node_allocated(&vma->node) && !vma->active)
+ list_move_tail(&vma->vm_link, &vma->vm->inactive_list);

return 0;
}
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index 9e549bded186..19d32f22f85d 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -1115,7 +1115,13 @@ void i915_vma_move_to_active(struct i915_vma *vma,

obj->dirty = 1; /* be paranoid */

- /* Add a reference if we're newly entering the active list. */
+ /* Add a reference if we're newly entering the active list.
+ * The order in which we add operations to the retirement queue is
+ * vital here: mark_active adds to the start of the callback list,
+ * such that subsequent callbacks are called first. Therefore we
+ * add the active reference first and queue for it to be dropped
+ * *last*.
+ */
if (obj->active == 0)
drm_gem_object_reference(&obj->base);
obj->active |= 1 << engine;
@@ -1139,6 +1145,8 @@ void i915_vma_move_to_active(struct i915_vma *vma,
}
}

+ vma->active |= 1 << engine;
+ i915_gem_request_mark_active(req, &vma->last_read[engine]);
list_move_tail(&vma->vm_link, &vma->vm->active_list);
}

diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 3a07ff622bd6..fd42b6491d28 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -3241,12 +3241,31 @@ void i915_gem_restore_gtt_mappings(struct drm_device *dev)
i915_ggtt_flush(dev_priv);
}

+static void
+i915_vma_retire(struct i915_gem_active *active,
+ struct drm_i915_gem_request *rq)
+{
+ const unsigned engine = rq->engine->id;
+ struct i915_vma *vma =
+ container_of(active, struct i915_vma, last_read[engine]);
+
+ GEM_BUG_ON((vma->active & (1 << engine)) == 0);
+ GEM_BUG_ON((vma->obj->active & vma->active) != vma->active);
+
+ vma->active &= ~(1 << engine);
+ if (vma->active)
+ return;
+
+ list_move_tail(&vma->vm_link, &vma->vm->inactive_list);
+}
+
static struct i915_vma *
__i915_gem_vma_create(struct drm_i915_gem_object *obj,
struct i915_address_space *vm,
const struct i915_ggtt_view *ggtt_view)
{
struct i915_vma *vma;
+ int i;

if (WARN_ON(i915_is_ggtt(vm) != !!ggtt_view))
return ERR_PTR(-EINVAL);
@@ -3258,6 +3277,8 @@ __i915_gem_vma_create(struct drm_i915_gem_object *obj,
INIT_LIST_HEAD(&vma->vm_link);
INIT_LIST_HEAD(&vma->obj_link);
INIT_LIST_HEAD(&vma->exec_list);
+ for (i = 0; i < ARRAY_SIZE(vma->last_read); i++)
+ init_request_active(&vma->last_read[i], i915_vma_retire);
vma->vm = vm;
vma->obj = obj;
vma->is_ggtt = i915_is_ggtt(vm);
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
index 9d3984602d34..0a7867fa5a1f 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.h
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
@@ -34,6 +34,8 @@
#ifndef __I915_GEM_GTT_H__
#define __I915_GEM_GTT_H__

+#include "i915_gem_request.h"
+
struct drm_i915_file_private;

typedef uint32_t gen6_pte_t;
@@ -180,10 +182,13 @@ struct i915_vma {
struct drm_i915_gem_object *obj;
struct i915_address_space *vm;

+ struct i915_gem_active last_read[I915_NUM_RINGS];
+
/** Flags and address space this VMA is bound to */
#define GLOBAL_BIND (1<<0)
#define LOCAL_BIND (1<<1)
unsigned int bound : 4;
+ unsigned int active : I915_NUM_RINGS;
bool is_ggtt : 1;

/**
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:17:33 UTC
Permalink
Since we may have VMA allocated for an object whose binding was
interrupted, there is a disparity between having elements on the
obj->vma_list and being bound. i915_gem_obj_bound_any() does this check,
but it is not rigorously observed - add an explicit count to make it easier.
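
The bookkeeping is a pair of increments and decrements at the bind and
unbind sites, condensed from the diff:

/* on a successful bind */
list_move_tail(&obj->global_list, &dev_priv->mm.bound_list);
obj->bind_count++;

/* on unbind; the unbound list is global, so only move to it when
 * the last VMA is unbound
 */
if (--obj->bind_count == 0)
	list_move_tail(&obj->global_list,
		       &to_i915(obj->base.dev)->mm.unbound_list);

/* callers that used to walk obj->vma_list can now just assert */
GEM_BUG_ON(obj->bind_count);
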

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_debugfs.c | 12 +++++------
drivers/gpu/drm/i915/i915_drv.h | 3 ++-
drivers/gpu/drm/i915/i915_gem.c | 34 +++++++++++++-------------------
drivers/gpu/drm/i915/i915_gem_shrinker.c | 17 +---------------
drivers/gpu/drm/i915/i915_gem_stolen.c | 1 +
5 files changed, 23 insertions(+), 44 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 0d1f470567b0..e2b1242e369b 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -164,6 +164,9 @@ describe_obj(struct seq_file *m, struct drm_i915_gem_object *obj)
if (obj->fence_reg != I915_FENCE_REG_NONE)
seq_printf(m, " (fence: %d)", obj->fence_reg);
list_for_each_entry(vma, &obj->vma_list, obj_link) {
+ if (!drm_mm_node_allocated(&vma->node))
+ continue;
+
seq_printf(m, " (%sgtt offset: %08llx, size: %08llx",
vma->is_ggtt ? "g" : "pp",
vma->node.start, vma->node.size);
@@ -331,11 +334,11 @@ static int per_file_stats(int id, void *ptr, void *data)
struct drm_i915_gem_object *obj = ptr;
struct file_stats *stats = data;
struct i915_vma *vma;
- int bound = 0;

stats->count++;
stats->total += obj->base.size;
-
+ if (!obj->bind_count)
+ stats->unbound += obj->base.size;
if (obj->base.name || obj->base.dma_buf)
stats->shared += obj->base.size;

@@ -343,8 +346,6 @@ static int per_file_stats(int id, void *ptr, void *data)
if (!drm_mm_node_allocated(&vma->node))
continue;

- bound++;
-
if (vma->is_ggtt) {
stats->global += vma->node.size;
} else {
@@ -362,9 +363,6 @@ static int per_file_stats(int id, void *ptr, void *data)
stats->inactive += vma->node.size;
}

- if (!bound)
- stats->unbound += obj->base.size;
-
return 0;
}

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index aa9d3782107e..8f5cf244094e 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2088,6 +2088,8 @@ struct drm_i915_gem_object {

unsigned int frontbuffer_bits:INTEL_FRONTBUFFER_BITS;

+ /** Count of VMA actually bound by this object */
+ unsigned int bind_count;
unsigned int pin_display;

struct sg_table *pages;
@@ -2874,7 +2876,6 @@ i915_gem_obj_ggtt_offset(struct drm_i915_gem_object *o)
return i915_gem_obj_ggtt_offset_view(o, &i915_ggtt_view_normal);
}

-bool i915_gem_obj_bound_any(struct drm_i915_gem_object *o);
bool i915_gem_obj_ggtt_bound_view(struct drm_i915_gem_object *o,
const struct i915_ggtt_view *view);
bool i915_gem_obj_bound(struct drm_i915_gem_object *o,
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 164ebdaa0369..ed3f306af42f 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1812,7 +1812,7 @@ i915_gem_object_put_pages(struct drm_i915_gem_object *obj)
if (obj->pages_pin_count)
return -EBUSY;

- BUG_ON(i915_gem_obj_bound_any(obj));
+ BUG_ON(obj->bind_count);

/* ->put_pages might need to allocate memory for the bit17 swizzle
* array, hence protect them from being reaped by removing them from gtt
@@ -2558,7 +2558,6 @@ static void i915_gem_object_finish_gtt(struct drm_i915_gem_object *obj)
static int __i915_vma_unbind(struct i915_vma *vma, bool wait)
{
struct drm_i915_gem_object *obj = vma->obj;
- struct drm_i915_private *dev_priv = obj->base.dev->dev_private;
int ret;

if (list_empty(&vma->obj_link))
@@ -2572,7 +2571,8 @@ static int __i915_vma_unbind(struct i915_vma *vma, bool wait)
if (vma->pin_count)
return -EBUSY;

- BUG_ON(obj->pages == NULL);
+ GEM_BUG_ON(obj->bind_count == 0);
+ GEM_BUG_ON(obj->pages == NULL);

if (wait) {
ret = i915_gem_object_wait_rendering(obj, false);
@@ -2610,8 +2610,9 @@ static int __i915_vma_unbind(struct i915_vma *vma, bool wait)

/* Since the unbound list is global, only move to that list if
* no more VMAs exist. */
- if (list_empty(&obj->vma_list))
- list_move_tail(&obj->global_list, &dev_priv->mm.unbound_list);
+ if (--obj->bind_count == 0)
+ list_move_tail(&obj->global_list,
+ &to_i915(obj->base.dev)->mm.unbound_list);

/* And finally now the object is completely decoupled from this vma,
* we can drop its hold on the backing storage and allow it to be
@@ -2849,6 +2850,7 @@ search_free:

list_move_tail(&obj->global_list, &dev_priv->mm.bound_list);
list_add_tail(&vma->vm_link, &vm->inactive_list);
+ obj->bind_count++;

return vma;

@@ -3037,7 +3039,6 @@ int i915_gem_object_set_cache_level(struct drm_i915_gem_object *obj,
{
struct drm_device *dev = obj->base.dev;
struct i915_vma *vma, *next;
- bool bound = false;
int ret = 0;

if (obj->cache_level == cache_level)
@@ -3061,8 +3062,7 @@ int i915_gem_object_set_cache_level(struct drm_i915_gem_object *obj,
ret = i915_vma_unbind(vma);
if (ret)
return ret;
- } else
- bound = true;
+ }
}

/* We can reuse the existing drm_mm nodes but need to change the
@@ -3072,7 +3072,7 @@ int i915_gem_object_set_cache_level(struct drm_i915_gem_object *obj,
* rewrite the PTE in the belief that doing so tramples upon less
* state and so involves less work.
*/
- if (bound) {
+ if (obj->bind_count) {
/* Before we change the PTE, the GPU must not be accessing it.
* If we wait upon the object, we know that all the bound
* VMA are no longer active.
@@ -3281,6 +3281,9 @@ i915_gem_object_pin_to_display_plane(struct drm_i915_gem_object *obj,
old_read_domains,
old_write_domain);

+ /* Increment the pages_pin_count to guard against the shrinker */
+ obj->pages_pin_count++;
+
return 0;

err_unpin_display:
@@ -3297,6 +3300,7 @@ i915_gem_object_unpin_from_display_plane(struct drm_i915_gem_object *obj,

i915_gem_object_ggtt_unpin_view(obj, view);

+ obj->pages_pin_count--;
obj->pin_display--;
}

@@ -3797,6 +3801,7 @@ void i915_gem_free_object(struct drm_gem_object *gem_obj)
dev_priv->mm.interruptible = was_interruptible;
}
}
+ GEM_BUG_ON(obj->bind_count);

/* Stolen objects don't hold a ref, but do hold pin count. Fix that up
* before progressing. */
@@ -4485,17 +4490,6 @@ bool i915_gem_obj_ggtt_bound_view(struct drm_i915_gem_object *o,
return false;
}

-bool i915_gem_obj_bound_any(struct drm_i915_gem_object *o)
-{
- struct i915_vma *vma;
-
- list_for_each_entry(vma, &o->vma_list, obj_link)
- if (drm_mm_node_allocated(&vma->node))
- return true;
-
- return false;
-}
-
unsigned long i915_gem_obj_size(struct drm_i915_gem_object *o,
struct i915_address_space *vm)
{
diff --git a/drivers/gpu/drm/i915/i915_gem_shrinker.c b/drivers/gpu/drm/i915/i915_gem_shrinker.c
index 777959b47ccf..fa190ef3f727 100644
--- a/drivers/gpu/drm/i915/i915_gem_shrinker.c
+++ b/drivers/gpu/drm/i915/i915_gem_shrinker.c
@@ -47,21 +47,6 @@ static bool mutex_is_locked_by(struct mutex *mutex, struct task_struct *task)
#endif
}

-static int num_vma_bound(struct drm_i915_gem_object *obj)
-{
- struct i915_vma *vma;
- int count = 0;
-
- list_for_each_entry(vma, &obj->vma_list, obj_link) {
- if (drm_mm_node_allocated(&vma->node))
- count++;
- if (vma->pin_count)
- count++;
- }
-
- return count;
-}
-
static bool swap_available(void)
{
return get_nr_swap_pages() > 0;
@@ -77,7 +62,7 @@ static bool can_release_pages(struct drm_i915_gem_object *obj)
* to the GPU, simply unbinding from the GPU is not going to succeed
* in releasing our pin count on the pages themselves.
*/
- if (obj->pages_pin_count != num_vma_bound(obj))
+ if (obj->pages_pin_count != obj->bind_count)
return false;

/* We can only return physical pages to the system if we can either
diff --git a/drivers/gpu/drm/i915/i915_gem_stolen.c b/drivers/gpu/drm/i915/i915_gem_stolen.c
index 463be259a505..1c81a1470baf 100644
--- a/drivers/gpu/drm/i915/i915_gem_stolen.c
+++ b/drivers/gpu/drm/i915/i915_gem_stolen.c
@@ -693,6 +693,7 @@ i915_gem_object_create_stolen_for_preallocated(struct drm_device *dev,
vma->bound |= GLOBAL_BIND;
__i915_vma_set_map_and_fenceable(vma);
list_add_tail(&vma->vm_link, &ggtt->inactive_list);
+ obj->bind_count++;

list_add_tail(&obj->global_list, &dev_priv->mm.bound_list);
i915_gem_object_pin_pages(obj);
--
2.7.0.rc3
Chris Wilson
2016-01-11 09:16:39 UTC
Permalink
After the GPU reset, when we discard all of the incomplete requests,
mark the GPU as having advanced to the last_submitted_seqno (i.e. as
having completed the requests and being ready for fresh work). The
impact of this is negligible, as all the requests will be considered
completed by this point; it just brings the HWS into line with
expectations for external viewers.

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_gem.c | 2 ++
1 file changed, 2 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index b956b8813307..a713e8a6cb36 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2818,6 +2818,8 @@ static void i915_gem_reset_ring_cleanup(struct drm_i915_private *dev_priv,
buffer->last_retired_head = buffer->tail;
intel_ring_update_space(buffer);
}
+
+ intel_ring_init_seqno(ring, ring->last_submitted_seqno);
}

void i915_gem_reset(struct drm_device *dev)
--
2.7.0.rc3
Chris Wilson
2016-01-11 10:44:31 UTC
Permalink
This reverts commit e9f24d5fb7cf3628b195b18ff3ac4e37937ceeae.

The patch was only a stop-gap measure that fixed half the problem - the
leak of the fbcon when restarting X. A complete solution required
releasing the VMA when the object itself was closed rather than relying
on file/process exit. The previous patches add the VMA tracking necessary
to close them along with the object, context or file, and so the time
has come to remove the partial fix.

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_drv.h | 5 -----
drivers/gpu/drm/i915/i915_gem.c | 14 ++------------
drivers/gpu/drm/i915/i915_gem_context.c | 22 ----------------------
3 files changed, 2 insertions(+), 39 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index fc35a9b8d910..4e912fd3b8c6 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2704,11 +2704,6 @@ int i915_vma_bind(struct i915_vma *vma, enum i915_cache_level cache_level,
u32 flags);
void __i915_vma_set_map_and_fenceable(struct i915_vma *vma);
int __must_check i915_vma_unbind(struct i915_vma *vma);
-/*
- * BEWARE: Do not use the function below unless you can _absolutely_
- * _guarantee_ VMA in question is _not in use_ anywhere.
- */
-int __must_check __i915_vma_unbind_no_wait(struct i915_vma *vma);
void i915_vma_close(struct i915_vma *vma);

int i915_gem_object_unbind(struct drm_i915_gem_object *obj);
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 16ee3bd7010e..391f840d29b7 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2608,7 +2608,7 @@ static void i915_vma_destroy(struct i915_vma *vma)
kmem_cache_free(to_i915(vma->obj->base.dev)->vmas, vma);
}

-static int __i915_vma_unbind(struct i915_vma *vma, bool wait)
+int i915_vma_unbind(struct i915_vma *vma)
{
struct drm_i915_gem_object *obj = vma->obj;
int ret, i;
@@ -2616,7 +2616,7 @@ static int __i915_vma_unbind(struct i915_vma *vma, bool wait)
/* First wait upon any activity as retiring the request may
* have side-effects such as unpinning or even unbinding this vma.
*/
- if (vma->active && wait) {
+ if (vma->active) {
bool was_closed;

/* When a closed VMA is retired, it is unbound - eek. */
@@ -2692,16 +2692,6 @@ destroy:
return 0;
}

-int i915_vma_unbind(struct i915_vma *vma)
-{
- return __i915_vma_unbind(vma, true);
-}
-
-int __i915_vma_unbind_no_wait(struct i915_vma *vma)
-{
- return __i915_vma_unbind(vma, false);
-}
-
int i915_gpu_idle(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index 4583d8fe3585..e0ecfdfb0c8c 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -133,21 +133,6 @@ static int get_context_size(struct drm_device *dev)
return ret;
}

-static void i915_gem_context_clean(struct intel_context *ctx)
-{
- struct i915_hw_ppgtt *ppgtt = ctx->ppgtt;
- struct i915_vma *vma, *next;
-
- if (!ppgtt)
- return;
-
- list_for_each_entry_safe(vma, next, &ppgtt->base.inactive_list,
- vm_link) {
- if (WARN_ON(__i915_vma_unbind_no_wait(vma)))
- break;
- }
-}
-
void i915_gem_context_free(struct kref *ctx_ref)
{
struct intel_context *ctx = container_of(ctx_ref, typeof(*ctx), ref);
@@ -158,13 +143,6 @@ void i915_gem_context_free(struct kref *ctx_ref)
if (i915.enable_execlists)
intel_lr_context_free(ctx);

- /*
- * This context is going away and we need to remove all VMAs still
- * around. This is to handle imported shared objects for which
- * destructor did not run when their handles were closed.
- */
- i915_gem_context_clean(ctx);
-
i915_ppgtt_put(ctx->ppgtt);

if (ctx->legacy_hw_ctx.rcs_state)
--
2.7.0.rc3
Chris Wilson
2016-01-11 10:44:43 UTC
Permalink
Only queue a CS flip if the outstanding request is not complete, and in
particular do not rely on the request tracking being fresh (since it is
only updated when requests are retired).
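
Condensed, the patched decision reads (a true return selects an MMIO
flip instead of queueing a flip on the ring):

if (obj->last_write.request == NULL ||
    i915_gem_request_completed(obj->last_write.request))
	return true;	/* nothing outstanding: no CS flip needed */

/* otherwise only queue on the engine already writing the object */
return ring != obj->last_write.request->engine;
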

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/intel_display.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index eef858d5376f..f227cdaf38ec 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -11309,8 +11309,11 @@ static bool use_mmio_flip(struct intel_engine_cs *ring,
!reservation_object_test_signaled_rcu(obj->base.dma_buf->resv,
false))
return true;
+ else if (!obj->last_write.request ||
+ i915_gem_request_completed(obj->last_write.request))
+ return true;
else
- return ring != i915_gem_request_get_engine(obj->last_write.request);
+ return ring != obj->last_write.request->engine;
}

static void skl_do_mmio_flip(struct intel_crtc *intel_crtc,
--
2.7.0.rc3
Chris Wilson
2016-01-11 10:44:32 UTC
Permalink
[ 196.988204] clocksource: timekeeping watchdog: Marking clocksource 'tsc' as unstable because the skew is too large:
[ 196.988512] clocksource: 'refined-jiffies' wd_now: ffff9b48 wd_last: ffff9acb mask: ffffffff
[ 196.988559] clocksource: 'tsc' cs_now: 4fcfa84354 cs_last: 4f95425e98 mask: ffffffffffffffff
[ 196.992115] clocksource: Switched to clocksource refined-jiffies

Followed by a hard lockup. Servicing the context-status buffer from the
interrupt handler can run long enough to trigger exactly this; move the
CSB processing out of hardirq context and into a dedicated realtime
kthread that the interrupt merely wakes.
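
The skeleton of the submission thread, condensed from the diff (CSB
entry parsing elided): a standard kthread consumer that sleeps until
gen8_cs_irq_handler() wakes it.

static int intel_execlists_submit(void *arg)
{
	struct intel_engine_cs *ring = arg;

	set_rtpriority();	/* SCHED_FIFO, see the diff */

	for (;;) {
		u8 head, tail;

		set_current_state(TASK_INTERRUPTIBLE);
		head = ring->next_context_status_buffer;
		tail = I915_READ(RING_CONTEXT_STATUS_PTR(ring)) &
			GEN8_CSB_PTR_MASK;
		if (head == tail) {
			if (kthread_should_stop())
				return 0;
			schedule();	/* woken by the CS interrupt */
			continue;
		}
		__set_current_state(TASK_RUNNING);

		/* ... consume CSB entries (head, tail] and submit ... */
	}
}
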

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_debugfs.c | 5 +-
drivers/gpu/drm/i915/i915_gem.c | 15 +--
drivers/gpu/drm/i915/i915_irq.c | 2 +-
drivers/gpu/drm/i915/intel_lrc.c | 164 +++++++++++++++++---------------
drivers/gpu/drm/i915/intel_lrc.h | 3 +-
drivers/gpu/drm/i915/intel_ringbuffer.h | 1 +
6 files changed, 98 insertions(+), 92 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 378bc73296aa..15a6fddfb79b 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -2094,7 +2094,6 @@ static int i915_execlists(struct seq_file *m, void *data)
for_each_ring(ring, dev_priv, ring_id) {
struct drm_i915_gem_request *head_req = NULL;
int count = 0;
- unsigned long flags;

seq_printf(m, "%s\n", ring->name);

@@ -2121,12 +2120,12 @@ static int i915_execlists(struct seq_file *m, void *data)
i, status, ctx_id);
}

- spin_lock_irqsave(&ring->execlist_lock, flags);
+ spin_lock(&ring->execlist_lock);
list_for_each(cursor, &ring->execlist_queue)
count++;
head_req = list_first_entry_or_null(&ring->execlist_queue,
struct drm_i915_gem_request, execlist_link);
- spin_unlock_irqrestore(&ring->execlist_lock, flags);
+ spin_unlock(&ring->execlist_lock);

seq_printf(m, "\t%d requests in queue\n", count);
if (head_req) {
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 391f840d29b7..eb875ecd7907 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2192,13 +2192,13 @@ static void i915_gem_reset_ring_cleanup(struct intel_engine_cs *engine)
*/

if (i915.enable_execlists) {
- spin_lock_irq(&engine->execlist_lock);
+ spin_lock(&engine->execlist_lock);

/* list_splice_tail_init checks for empty lists */
list_splice_tail_init(&engine->execlist_queue,
&engine->execlist_retired_req_list);

- spin_unlock_irq(&engine->execlist_lock);
+ spin_unlock(&engine->execlist_lock);
intel_execlists_retire_requests(engine);
}

@@ -2290,15 +2290,8 @@ i915_gem_retire_requests(struct drm_device *dev)
for_each_ring(ring, dev_priv, i) {
i915_gem_retire_requests_ring(ring);
idle &= list_empty(&ring->request_list);
- if (i915.enable_execlists) {
- unsigned long flags;
-
- spin_lock_irqsave(&ring->execlist_lock, flags);
- idle &= list_empty(&ring->execlist_queue);
- spin_unlock_irqrestore(&ring->execlist_lock, flags);
-
- intel_execlists_retire_requests(ring);
- }
+ if (i915.enable_execlists)
+ idle &= intel_execlists_retire_requests(ring);
}

if (idle)
diff --git a/drivers/gpu/drm/i915/i915_irq.c b/drivers/gpu/drm/i915/i915_irq.c
index ce047ac84f5f..b2ef2d0c211b 100644
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -1316,7 +1316,7 @@ gen8_cs_irq_handler(struct intel_engine_cs *ring, u32 iir, int test_shift)
if (iir & (GT_RENDER_USER_INTERRUPT << test_shift))
notify_ring(ring);
if (iir & (GT_CONTEXT_SWITCH_INTERRUPT << test_shift))
- intel_lrc_irq_handler(ring);
+ wake_up_process(ring->execlists_submit);
}

static irqreturn_t gen8_gt_irq_handler(struct drm_i915_private *dev_priv,
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index b5f62b5f4913..de5889e95d6d 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -132,6 +132,8 @@
*
*/

+#include <linux/kthread.h>
+
#include <drm/drmP.h>
#include <drm/i915_drm.h>
#include "i915_drv.h"
@@ -341,7 +343,7 @@ static void execlists_elsp_write(struct drm_i915_gem_request *rq0,
rq0->elsp_submitted++;

/* You must always write both descriptors in the order below. */
- spin_lock(&dev_priv->uncore.lock);
+ spin_lock_irq(&dev_priv->uncore.lock);
intel_uncore_forcewake_get__locked(dev_priv, FORCEWAKE_ALL);
I915_WRITE_FW(RING_ELSP(engine), upper_32_bits(desc[1]));
I915_WRITE_FW(RING_ELSP(engine), lower_32_bits(desc[1]));
@@ -353,7 +355,7 @@ static void execlists_elsp_write(struct drm_i915_gem_request *rq0,
/* ELSP is a wo register, use another nearby reg for posting */
POSTING_READ_FW(RING_EXECLIST_STATUS_LO(engine));
intel_uncore_forcewake_put__locked(dev_priv, FORCEWAKE_ALL);
- spin_unlock(&dev_priv->uncore.lock);
+ spin_unlock_irq(&dev_priv->uncore.lock);
}

static int execlists_update_context(struct drm_i915_gem_request *rq)
@@ -492,89 +494,84 @@ static bool execlists_check_remove_request(struct intel_engine_cs *ring,
return false;
}

-static void get_context_status(struct intel_engine_cs *ring,
- u8 read_pointer,
- u32 *status, u32 *context_id)
+static void set_rtpriority(void)
{
- struct drm_i915_private *dev_priv = ring->dev->dev_private;
-
- if (WARN_ON(read_pointer >= GEN8_CSB_ENTRIES))
- return;
-
- *status = I915_READ(RING_CONTEXT_STATUS_BUF_LO(ring, read_pointer));
- *context_id = I915_READ(RING_CONTEXT_STATUS_BUF_HI(ring, read_pointer));
+ struct sched_param param = { .sched_priority = MAX_USER_RT_PRIO/2-1 };
+ sched_setscheduler_nocheck(current, SCHED_FIFO, &param);
}

-/**
- * intel_lrc_irq_handler() - handle Context Switch interrupts
- * @ring: Engine Command Streamer to handle.
- *
- * Check the unread Context Status Buffers and manage the submission of new
- * contexts to the ELSP accordingly.
- */
-void intel_lrc_irq_handler(struct intel_engine_cs *ring)
+static int intel_execlists_submit(void *arg)
{
- struct drm_i915_private *dev_priv = ring->dev->dev_private;
- u32 status_pointer;
- u8 read_pointer;
- u8 write_pointer;
- u32 status = 0;
- u32 status_id;
- u32 submit_contexts = 0;
+ struct intel_engine_cs *ring = arg;
+ struct drm_i915_private *dev_priv = ring->i915;

- status_pointer = I915_READ(RING_CONTEXT_STATUS_PTR(ring));
+ set_rtpriority();

- read_pointer = ring->next_context_status_buffer;
- write_pointer = GEN8_CSB_WRITE_PTR(status_pointer);
- if (read_pointer > write_pointer)
- write_pointer += GEN8_CSB_ENTRIES;
+ do {
+ u32 status;
+ u32 status_id;
+ u32 submit_contexts;
+ u8 head, tail;

- spin_lock(&ring->execlist_lock);
+ set_current_state(TASK_INTERRUPTIBLE);
+ head = ring->next_context_status_buffer;
+ tail = I915_READ(RING_CONTEXT_STATUS_PTR(ring)) & GEN8_CSB_PTR_MASK;
+ if (head == tail) {
+ if (kthread_should_stop())
+ return 0;

- while (read_pointer < write_pointer) {
+ schedule();
+ continue;
+ }
+ __set_current_state(TASK_RUNNING);

- get_context_status(ring, ++read_pointer % GEN8_CSB_ENTRIES,
- &status, &status_id);
+ if (head > tail)
+ tail += GEN8_CSB_ENTRIES;

- if (status & GEN8_CTX_STATUS_IDLE_ACTIVE)
- continue;
+ status = 0;
+ submit_contexts = 0;

- if (status & GEN8_CTX_STATUS_PREEMPTED) {
- if (status & GEN8_CTX_STATUS_LITE_RESTORE) {
- if (execlists_check_remove_request(ring, status_id))
- WARN(1, "Lite Restored request removed from queue\n");
- } else
- WARN(1, "Preemption without Lite Restore\n");
- }
+ spin_lock(&ring->execlist_lock);

- if ((status & GEN8_CTX_STATUS_ACTIVE_IDLE) ||
- (status & GEN8_CTX_STATUS_ELEMENT_SWITCH)) {
- if (execlists_check_remove_request(ring, status_id))
- submit_contexts++;
- }
- }
+ while (head++ < tail) {
+ status = I915_READ(RING_CONTEXT_STATUS_BUF_LO(ring, head % GEN8_CSB_ENTRIES));
+ status_id = I915_READ(RING_CONTEXT_STATUS_BUF_HI(ring, head % GEN8_CSB_ENTRIES));

- if (disable_lite_restore_wa(ring)) {
- /* Prevent a ctx to preempt itself */
- if ((status & GEN8_CTX_STATUS_ACTIVE_IDLE) &&
- (submit_contexts != 0))
- execlists_context_unqueue(ring);
- } else if (submit_contexts != 0) {
- execlists_context_unqueue(ring);
- }
+ if (status & GEN8_CTX_STATUS_IDLE_ACTIVE)
+ continue;

- spin_unlock(&ring->execlist_lock);
+ if (status & GEN8_CTX_STATUS_PREEMPTED) {
+ if (status & GEN8_CTX_STATUS_LITE_RESTORE) {
+ if (execlists_check_remove_request(ring, status_id))
+ WARN(1, "Lite Restored request removed from queue\n");
+ } else
+ WARN(1, "Preemption without Lite Restore\n");
+ }

- if (unlikely(submit_contexts > 2))
- DRM_ERROR("More than two context complete events?\n");
+ if ((status & GEN8_CTX_STATUS_ACTIVE_IDLE) ||
+ (status & GEN8_CTX_STATUS_ELEMENT_SWITCH)) {
+ if (execlists_check_remove_request(ring, status_id))
+ submit_contexts++;
+ }
+ }
+
+ if (disable_lite_restore_wa(ring)) {
+ /* Prevent a ctx from preempting itself */
+ if ((status & GEN8_CTX_STATUS_ACTIVE_IDLE) &&
+ (submit_contexts != 0))
+ execlists_context_unqueue(ring);
+ } else if (submit_contexts != 0) {
+ execlists_context_unqueue(ring);
+ }

- ring->next_context_status_buffer = write_pointer % GEN8_CSB_ENTRIES;
+ spin_unlock(&ring->execlist_lock);

- /* Update the read pointer to the old write pointer. Manual ringbuffer
- * management ftw </sarcasm> */
- I915_WRITE(RING_CONTEXT_STATUS_PTR(ring),
- _MASKED_FIELD(GEN8_CSB_READ_PTR_MASK,
- ring->next_context_status_buffer << 8));
+ WARN(submit_contexts > 2, "More than two context complete events?\n");
+ ring->next_context_status_buffer = tail % GEN8_CSB_ENTRIES;
+ I915_WRITE(RING_CONTEXT_STATUS_PTR(ring),
+ _MASKED_FIELD(GEN8_CSB_PTR_MASK << 8,
+ ring->next_context_status_buffer<<8));
+ } while (1);
}

static int execlists_context_queue(struct drm_i915_gem_request *request)
@@ -585,7 +582,7 @@ static int execlists_context_queue(struct drm_i915_gem_request *request)

i915_gem_request_get(request);

- spin_lock_irq(&engine->execlist_lock);
+ spin_lock(&engine->execlist_lock);

list_for_each_entry(cursor, &engine->execlist_queue, execlist_link)
if (++num_elements > 2)
@@ -611,7 +608,7 @@ static int execlists_context_queue(struct drm_i915_gem_request *request)
if (num_elements == 0)
execlists_context_unqueue(engine);

- spin_unlock_irq(&engine->execlist_lock);
+ spin_unlock(&engine->execlist_lock);

return 0;
}
@@ -667,19 +664,19 @@ intel_logical_ring_advance_and_submit(struct drm_i915_gem_request *request)
execlists_context_queue(request);
}

-void intel_execlists_retire_requests(struct intel_engine_cs *ring)
+bool intel_execlists_retire_requests(struct intel_engine_cs *ring)
{
struct drm_i915_gem_request *req, *tmp;
struct list_head retired_list;

WARN_ON(!mutex_is_locked(&ring->dev->struct_mutex));
if (list_empty(&ring->execlist_retired_req_list))
- return;
+ goto out;

INIT_LIST_HEAD(&retired_list);
- spin_lock_irq(&ring->execlist_lock);
+ spin_lock(&ring->execlist_lock);
list_replace_init(&ring->execlist_retired_req_list, &retired_list);
- spin_unlock_irq(&ring->execlist_lock);
+ spin_unlock(&ring->execlist_lock);

list_for_each_entry_safe(req, tmp, &retired_list, execlist_link) {
struct intel_context *ctx = req->ctx;
@@ -691,6 +688,9 @@ void intel_execlists_retire_requests(struct intel_engine_cs *ring)
list_del(&req->execlist_link);
i915_gem_request_put(req);
}
+
+out:
+ return list_empty(&ring->execlist_queue);
}

void intel_logical_ring_stop(struct intel_engine_cs *ring)
@@ -1525,6 +1525,9 @@ void intel_logical_ring_cleanup(struct intel_engine_cs *ring)
if (!intel_engine_initialized(ring))
return;

+ if (ring->execlists_submit)
+ kthread_stop(ring->execlists_submit);
+
if (ring->buffer) {
struct drm_i915_private *dev_priv = ring->i915;
intel_logical_ring_stop(ring);
@@ -1550,13 +1553,15 @@ void intel_logical_ring_cleanup(struct intel_engine_cs *ring)

static int logical_ring_init(struct drm_device *dev, struct intel_engine_cs *ring)
{
+ struct drm_i915_private *dev_priv = to_i915(dev);
+ struct task_struct *task;
int ret;

/* Intentionally left blank. */
ring->buffer = NULL;

ring->dev = dev;
- ring->i915 = to_i915(dev);
+ ring->i915 = dev_priv;
ring->fence_context = fence_context_alloc(1);
INIT_LIST_HEAD(&ring->request_list);
i915_gem_batch_pool_init(dev, &ring->batch_pool);
@@ -1587,6 +1592,15 @@ static int logical_ring_init(struct drm_device *dev, struct intel_engine_cs *rin
goto error;
}

+ ring->next_context_status_buffer =
+ I915_READ(RING_CONTEXT_STATUS_PTR(ring)) & GEN8_CSB_PTR_MASK;
+ task = kthread_run(intel_execlists_submit, ring,
+ "irq/i915:%de", ring->id);
+ if (IS_ERR(task))
+ goto error;
+
+ ring->execlists_submit = task;
+
return 0;

error:
diff --git a/drivers/gpu/drm/i915/intel_lrc.h b/drivers/gpu/drm/i915/intel_lrc.h
index 87bc9acc4224..33f82a84065a 100644
--- a/drivers/gpu/drm/i915/intel_lrc.h
+++ b/drivers/gpu/drm/i915/intel_lrc.h
@@ -81,7 +81,6 @@ uint64_t intel_lr_context_descriptor(struct intel_context *ctx,
int intel_sanitize_enable_execlists(struct drm_device *dev, int enable_execlists);
u32 intel_execlists_ctx_id(struct drm_i915_gem_object *ctx_obj);

-void intel_lrc_irq_handler(struct intel_engine_cs *ring);
-void intel_execlists_retire_requests(struct intel_engine_cs *ring);
+bool intel_execlists_retire_requests(struct intel_engine_cs *ring);

#endif /* _INTEL_LRC_H_ */
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index bb92d831a100..edaf07b2292e 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -291,6 +291,7 @@ struct intel_engine_cs {
} semaphore;

/* Execlists */
+ struct task_struct *execlists_submit;
spinlock_t execlist_lock;
struct list_head execlist_queue;
struct list_head execlist_retired_req_list;
--
2.7.0.rc3
Chris Wilson
2016-01-11 10:44:33 UTC
Permalink
As well as dramatically simplifying the submission code (requests ftw),
this lets us reduce the execlist spinlock hold time and, importantly,
avoid holding it across the context switch register reads.
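
For illustration, the shape of the submission thread's inner loop after
this change is roughly the following (a condensed sketch of the code in
the patch below; the forcewake handling and the preemption warning are
elided):

        /* Drain the context-status events without holding any locks. */
        seqno = 0;
        while (head++ < tail) {
                u32 status = I915_READ_FW(RING_CONTEXT_STATUS_BUF_LO(ring,
                                        head % GEN8_CSB_ENTRIES));
                if (status & (GEN8_CTX_STATUS_ACTIVE_IDLE |
                              GEN8_CTX_STATUS_ELEMENT_SWITCH))
                        seqno = I915_READ_FW(RING_CONTEXT_STATUS_BUF_HI(ring,
                                        head % GEN8_CSB_ENTRIES));
        }

        /* Take the spinlock only for the queue bookkeeping itself. */
        if (seqno) {
                spin_lock(&ring->execlist_lock);
                if (execlists_complete_requests(ring, seqno))
                        execlists_context_unqueue(ring);
                spin_unlock(&ring->execlist_lock);
        }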

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_debugfs.c | 20 +-
drivers/gpu/drm/i915/i915_gem.c | 8 +-
drivers/gpu/drm/i915/i915_gem_request.h | 21 +-
drivers/gpu/drm/i915/i915_guc_submission.c | 31 +-
drivers/gpu/drm/i915/intel_lrc.c | 505 +++++++++++------------------
drivers/gpu/drm/i915/intel_lrc.h | 3 -
drivers/gpu/drm/i915/intel_ringbuffer.h | 8 +-
7 files changed, 209 insertions(+), 387 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 15a6fddfb79b..a5ea90944bbb 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -2005,8 +2005,7 @@ static void i915_dump_lrc_obj(struct seq_file *m,
return;
}

- seq_printf(m, "CONTEXT: %s %u\n", ring->name,
- intel_execlists_ctx_id(ctx_obj));
+ seq_printf(m, "CONTEXT: %s\n", ring->name);

if (!i915_gem_obj_ggtt_bound(ctx_obj))
seq_puts(m, "\tNot bound in GGTT\n");
@@ -2092,7 +2091,6 @@ static int i915_execlists(struct seq_file *m, void *data)
intel_runtime_pm_get(dev_priv);

for_each_ring(ring, dev_priv, ring_id) {
- struct drm_i915_gem_request *head_req = NULL;
int count = 0;

seq_printf(m, "%s\n", ring->name);
@@ -2105,8 +2103,8 @@ static int i915_execlists(struct seq_file *m, void *data)
status_pointer = I915_READ(RING_CONTEXT_STATUS_PTR(ring));
seq_printf(m, "\tStatus pointer: 0x%08X\n", status_pointer);

- read_pointer = ring->next_context_status_buffer;
- write_pointer = GEN8_CSB_WRITE_PTR(status_pointer);
+ read_pointer = (status_pointer >> 8) & GEN8_CSB_PTR_MASK;
+ write_pointer = status_pointer & GEN8_CSB_PTR_MASK;
if (read_pointer > write_pointer)
write_pointer += GEN8_CSB_ENTRIES;
seq_printf(m, "\tRead pointer: 0x%08X, write pointer 0x%08X\n",
@@ -2123,21 +2121,9 @@ static int i915_execlists(struct seq_file *m, void *data)
spin_lock(&ring->execlist_lock);
list_for_each(cursor, &ring->execlist_queue)
count++;
- head_req = list_first_entry_or_null(&ring->execlist_queue,
- struct drm_i915_gem_request, execlist_link);
spin_unlock(&ring->execlist_lock);

seq_printf(m, "\t%d requests in queue\n", count);
- if (head_req) {
- struct drm_i915_gem_object *ctx_obj;
-
- ctx_obj = head_req->ctx->engine[ring_id].state;
- seq_printf(m, "\tHead request id: %u\n",
- intel_execlists_ctx_id(ctx_obj));
- seq_printf(m, "\tHead request tail: %u\n",
- head_req->tail);
- }
-
seq_putc(m, '\n');
}

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index eb875ecd7907..054e11cff00f 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2193,12 +2193,12 @@ static void i915_gem_reset_ring_cleanup(struct intel_engine_cs *engine)

if (i915.enable_execlists) {
spin_lock(&engine->execlist_lock);
-
- /* list_splice_tail_init checks for empty lists */
list_splice_tail_init(&engine->execlist_queue,
- &engine->execlist_retired_req_list);
-
+ &engine->execlist_completed);
+ memset(&engine->execlist_port, 0,
+ sizeof(engine->execlist_port));
spin_unlock(&engine->execlist_lock);
+
intel_execlists_retire_requests(engine);
}

diff --git a/drivers/gpu/drm/i915/i915_gem_request.h b/drivers/gpu/drm/i915/i915_gem_request.h
index 59957d5edfdb..c2e83584f8a2 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.h
+++ b/drivers/gpu/drm/i915/i915_gem_request.h
@@ -63,10 +63,11 @@ struct drm_i915_gem_request {
* This is required to calculate the maximum available ringbuffer
* space without overwriting the postfix.
*/
- u32 postfix;
+ u32 postfix;

/** Position in the ringbuffer of the end of the whole request */
u32 tail;
+ u32 wa_tail;

/**
* Context and ring buffer related to this request
@@ -99,24 +100,8 @@ struct drm_i915_gem_request {
/** process identifier submitting this request */
struct pid *pid;

- /**
- * The ELSP only accepts two elements at a time, so we queue
- * context/tail pairs on a given queue (ring->execlist_queue) until the
- * hardware is available. The queue serves a double purpose: we also use
- * it to keep track of the up to 2 contexts currently in the hardware
- * (usually one in execution and the other queued up by the GPU): We
- * only remove elements from the head of the queue when the hardware
- * informs us that an element has been completed.
- *
- * All accesses to the queue are mediated by a spinlock
- * (ring->execlist_lock).
- */
-
/** Execlist link in the submission queue.*/
- struct list_head execlist_link;
-
- /** Execlists no. of times this request has been sent to the ELSP */
- int elsp_submitted;
+ struct list_head execlist_link; /* guarded by engine->execlist_lock */
};

struct drm_i915_gem_request *
diff --git a/drivers/gpu/drm/i915/i915_guc_submission.c b/drivers/gpu/drm/i915/i915_guc_submission.c
index 5a6251926367..f4e09952d52c 100644
--- a/drivers/gpu/drm/i915/i915_guc_submission.c
+++ b/drivers/gpu/drm/i915/i915_guc_submission.c
@@ -393,7 +393,6 @@ static void guc_init_ctx_desc(struct intel_guc *guc,
struct intel_ring *ring = ctx->engine[i].ring;
struct intel_engine_cs *engine;
struct drm_i915_gem_object *obj;
- uint64_t ctx_desc;

/* TODO: We have a design issue to be solved here. Only when we
* receive the first batch, we know which engine is used by the
@@ -407,8 +406,7 @@ static void guc_init_ctx_desc(struct intel_guc *guc,
break; /* XXX: continue? */

engine = ring->engine;
- ctx_desc = intel_lr_context_descriptor(ctx, engine);
- lrc->context_desc = (u32)ctx_desc;
+ lrc->context_desc = engine->execlist_context_descriptor;

/* The state page is after PPHWSP */
lrc->ring_lcra = i915_gem_obj_ggtt_offset(obj) +
@@ -548,7 +546,7 @@ static int guc_add_workqueue_item(struct i915_guc_client *gc,
WQ_NO_WCFLUSH_WAIT;

/* The GuC wants only the low-order word of the context descriptor */
- wqi->context_desc = (u32)intel_lr_context_descriptor(rq->ctx, rq->engine);
+ wqi->context_desc = rq->engine->execlist_context_descriptor;

/* The GuC firmware wants the tail index in QWords, not bytes */
tail = rq->ring->tail >> 3;
@@ -562,27 +560,6 @@ static int guc_add_workqueue_item(struct i915_guc_client *gc,

#define CTX_RING_BUFFER_START 0x08

-/* Update the ringbuffer pointer in a saved context image */
-static void lr_context_update(struct drm_i915_gem_request *rq)
-{
- enum intel_engine_id ring_id = rq->engine->id;
- struct drm_i915_gem_object *ctx_obj = rq->ctx->engine[ring_id].state;
- struct drm_i915_gem_object *rb_obj = rq->ring->obj;
- struct page *page;
- uint32_t *reg_state;
-
- BUG_ON(!ctx_obj);
- WARN_ON(!i915_gem_obj_is_pinned(ctx_obj));
- WARN_ON(!i915_gem_obj_is_pinned(rb_obj));
-
- page = i915_gem_object_get_dirty_page(ctx_obj, LRC_STATE_PN);
- reg_state = kmap_atomic(page);
-
- reg_state[CTX_RING_BUFFER_START+1] = i915_gem_obj_ggtt_offset(rb_obj);
-
- kunmap_atomic(reg_state);
-}
-
/**
* i915_guc_submit() - Submit commands through GuC
* @client: the guc client where commands will go through
@@ -597,10 +574,6 @@ int i915_guc_submit(struct i915_guc_client *client,
enum intel_engine_id ring_id = rq->engine->id;
int q_ret, b_ret;

- /* Need this because of the deferred pin ctx and ring */
- /* Shall we move this right after ring is pinned? */
- lr_context_update(rq);
-
q_ret = guc_add_workqueue_item(client, rq);
if (q_ret == 0)
b_ret = guc_ring_doorbell(client);
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index de5889e95d6d..80b346a3fd8a 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -265,233 +265,133 @@ int intel_sanitize_enable_execlists(struct drm_device *dev, int enable_execlists
return 0;
}

-/**
- * intel_execlists_ctx_id() - get the Execlists Context ID
- * @ctx_obj: Logical Ring Context backing object.
- *
- * Do not confuse with ctx->id! Unfortunately we have a name overload
- * here: the old context ID we pass to userspace as a handler so that
- * they can refer to a context, and the new context ID we pass to the
- * ELSP so that the GPU can inform us of the context status via
- * interrupts.
- *
- * Return: 20-bits globally unique context ID.
- */
-u32 intel_execlists_ctx_id(struct drm_i915_gem_object *ctx_obj)
-{
- u32 lrca = i915_gem_obj_ggtt_offset(ctx_obj) +
- LRC_PPHWSP_PN * PAGE_SIZE;
-
- /* LRCA is required to be 4K aligned so the more significant 20 bits
- * are globally unique */
- return lrca >> 12;
-}
-
-static bool disable_lite_restore_wa(struct intel_engine_cs *ring)
-{
- return (IS_SKL_REVID(ring->dev, 0, SKL_REVID_B0) ||
- IS_BXT_REVID(ring->dev, 0, BXT_REVID_A1)) &&
- (ring->id == VCS || ring->id == VCS2);
-}
-
-uint64_t intel_lr_context_descriptor(struct intel_context *ctx,
- struct intel_engine_cs *ring)
+static u32 execlists_request_write_tail(struct drm_i915_gem_request *req)
{
- struct drm_i915_gem_object *ctx_obj = ctx->engine[ring->id].state;
- uint64_t desc;
- uint64_t lrca = i915_gem_obj_ggtt_offset(ctx_obj) +
- LRC_PPHWSP_PN * PAGE_SIZE;
-
- WARN_ON(lrca & 0xFFFFFFFF00000FFFULL);
-
- desc = GEN8_CTX_VALID;
- desc |= GEN8_CTX_ADDRESSING_MODE(ring->i915) << GEN8_CTX_ADDRESSING_MODE_SHIFT;
- if (IS_GEN8(ring->i915))
- desc |= GEN8_CTX_L3LLC_COHERENT;
- desc |= GEN8_CTX_PRIVILEGE;
- desc |= lrca;
- desc |= (u64)intel_execlists_ctx_id(ctx_obj) << GEN8_CTX_ID_SHIFT;
-
- /* TODO: WaDisableLiteRestore when we start using semaphore
- * signalling between Command Streamers */
- /* desc |= GEN8_CTX_FORCE_RESTORE; */
+ struct intel_ring *ring = req->ring;
+ struct i915_hw_ppgtt *ppgtt = req->ctx->ppgtt;

- /* WaEnableForceRestoreInCtxtDescForVCS:skl */
- /* WaEnableForceRestoreInCtxtDescForVCS:bxt */
- if (disable_lite_restore_wa(ring))
- desc |= GEN8_CTX_FORCE_RESTORE;
+ if (ppgtt && !USES_FULL_48BIT_PPGTT(req->i915)) {
+ /* True 32b PPGTT with dynamic page allocation: update PDP
+ * registers and point the unallocated PDPs to scratch page.
+ * PML4 is allocated during ppgtt init, so this is not needed
+ * in 48-bit mode.
+ */
+ if (ppgtt->pd_dirty_rings & intel_engine_flag(req->engine)) {
+ ASSIGN_CTX_PDP(ppgtt, ring->registers, 3);
+ ASSIGN_CTX_PDP(ppgtt, ring->registers, 2);
+ ASSIGN_CTX_PDP(ppgtt, ring->registers, 1);
+ ASSIGN_CTX_PDP(ppgtt, ring->registers, 0);
+ ppgtt->pd_dirty_rings &= ~intel_engine_flag(req->engine);
+ }
+ }

- return desc;
+ ring->registers[CTX_RING_TAIL+1] = req->tail;
+ return ring->context_descriptor;
}

-static void execlists_elsp_write(struct drm_i915_gem_request *rq0,
- struct drm_i915_gem_request *rq1)
+static void execlists_submit_pair(struct intel_engine_cs *ring)
{
+ struct drm_i915_private *dev_priv = ring->i915;
+ uint32_t desc[4];

- struct intel_engine_cs *engine = rq0->engine;
- struct drm_i915_private *dev_priv = rq0->i915;
- uint64_t desc[2];
-
- if (rq1) {
- desc[1] = intel_lr_context_descriptor(rq1->ctx, rq1->engine);
- rq1->elsp_submitted++;
- } else {
- desc[1] = 0;
- }
+ if (ring->execlist_port[1]) {
+ desc[0] = execlists_request_write_tail(ring->execlist_port[1]);
+ desc[1] = ring->execlist_port[1]->fence.seqno;
+ } else
+ desc[1] = desc[0] = 0;

- desc[0] = intel_lr_context_descriptor(rq0->ctx, rq0->engine);
- rq0->elsp_submitted++;
+ desc[2] = execlists_request_write_tail(ring->execlist_port[0]);
+ desc[3] = ring->execlist_port[0]->fence.seqno;

- /* You must always write both descriptors in the order below. */
- spin_lock_irq(&dev_priv->uncore.lock);
- intel_uncore_forcewake_get__locked(dev_priv, FORCEWAKE_ALL);
- I915_WRITE_FW(RING_ELSP(engine), upper_32_bits(desc[1]));
- I915_WRITE_FW(RING_ELSP(engine), lower_32_bits(desc[1]));
+ /* Note: You must always write both descriptors in the order below. */
+ I915_WRITE_FW(RING_ELSP(ring), desc[1]);
+ I915_WRITE_FW(RING_ELSP(ring), desc[0]);
+ I915_WRITE_FW(RING_ELSP(ring), desc[3]);

- I915_WRITE_FW(RING_ELSP(engine), upper_32_bits(desc[0]));
/* The context is automatically loaded after the following */
- I915_WRITE_FW(RING_ELSP(engine), lower_32_bits(desc[0]));
-
- /* ELSP is a wo register, use another nearby reg for posting */
- POSTING_READ_FW(RING_EXECLIST_STATUS_LO(engine));
- intel_uncore_forcewake_put__locked(dev_priv, FORCEWAKE_ALL);
- spin_unlock_irq(&dev_priv->uncore.lock);
+ I915_WRITE_FW(RING_ELSP(ring), desc[2]);
}

-static int execlists_update_context(struct drm_i915_gem_request *rq)
+static void execlists_context_unqueue(struct intel_engine_cs *engine)
{
- struct i915_hw_ppgtt *ppgtt = rq->ctx->ppgtt;
- struct drm_i915_gem_object *ctx_obj = rq->ctx->engine[rq->engine->id].state;
- struct drm_i915_gem_object *rb_obj = rq->ring->obj;
- struct page *page;
- uint32_t *reg_state;
-
- BUG_ON(!ctx_obj);
- WARN_ON(!i915_gem_obj_is_pinned(ctx_obj));
- WARN_ON(!i915_gem_obj_is_pinned(rb_obj));
-
- page = i915_gem_object_get_dirty_page(ctx_obj, LRC_STATE_PN);
- reg_state = kmap_atomic(page);
+ struct drm_i915_gem_request *cursor;
+ bool submit = false;
+ int port = 0;

- reg_state[CTX_RING_TAIL+1] = rq->tail;
- reg_state[CTX_RING_BUFFER_START+1] = i915_gem_obj_ggtt_offset(rb_obj);
+ assert_spin_locked(&engine->execlist_lock);

- if (ppgtt && !USES_FULL_48BIT_PPGTT(rq->i915)) {
- /* True 32b PPGTT with dynamic page allocation: update PDP
- * registers and point the unallocated PDPs to scratch page.
- * PML4 is allocated during ppgtt init, so this is not needed
- * in 48-bit mode.
+ /* Try to read in pairs and fill both submission ports */
+ cursor = engine->execlist_port[port];
+ if (cursor != NULL) {
+ /* WaIdleLiteRestore:bdw,skl
+ * Apply the wa NOOPs to prevent ring:HEAD == req:TAIL
+ * as we resubmit the request. See gen8_emit_request()
+ * for where we prepare the padding after the end of the
+ * request.
*/
- ASSIGN_CTX_PDP(ppgtt, reg_state, 3);
- ASSIGN_CTX_PDP(ppgtt, reg_state, 2);
- ASSIGN_CTX_PDP(ppgtt, reg_state, 1);
- ASSIGN_CTX_PDP(ppgtt, reg_state, 0);
- }
-
- kunmap_atomic(reg_state);
-
- return 0;
-}
+ cursor->tail = cursor->wa_tail;
+ cursor = list_next_entry(cursor, execlist_link);
+ } else
+ cursor = list_first_entry(&engine->execlist_queue,
+ typeof(*cursor),
+ execlist_link);
+ while (&cursor->execlist_link != &engine->execlist_queue) {
+ /* Same ctx: ignore earlier request, as the
+ * second request extends the first.
+ */
+ if (engine->execlist_port[port] &&
+ cursor->ctx != engine->execlist_port[port]->ctx) {
+ if (++port == ARRAY_SIZE(engine->execlist_port))
+ break;
+ }

-static void execlists_submit_requests(struct drm_i915_gem_request *rq0,
- struct drm_i915_gem_request *rq1)
-{
- execlists_update_context(rq0);
+ engine->execlist_port[port] = cursor;
+ submit = true;

- if (rq1)
- execlists_update_context(rq1);
+ cursor = list_next_entry(cursor, execlist_link);
+ }

- execlists_elsp_write(rq0, rq1);
+ if (submit)
+ execlists_submit_pair(engine);
}

-static void execlists_context_unqueue(struct intel_engine_cs *engine)
+static bool execlists_complete_requests(struct intel_engine_cs *engine,
+ u32 seqno)
{
- struct drm_i915_gem_request *req0 = NULL, *req1 = NULL;
- struct drm_i915_gem_request *cursor = NULL, *tmp = NULL;
-
assert_spin_locked(&engine->execlist_lock);

- /*
- * If irqs are not active generate a warning as batches that finish
- * without the irqs may get lost and a GPU Hang may occur.
- */
- WARN_ON(!intel_irqs_enabled(engine->dev->dev_private));
+ do {
+ struct drm_i915_gem_request *req;

- if (list_empty(&engine->execlist_queue))
- return;
+ req = engine->execlist_port[0];
+ if (req == NULL)
+ break;

- /* Try to read in pairs */
- list_for_each_entry_safe(cursor, tmp, &engine->execlist_queue,
- execlist_link) {
- if (!req0) {
- req0 = cursor;
- } else if (req0->ctx == cursor->ctx) {
- /* Same ctx: ignore first request, as second request
- * will update tail past first request's workload */
- cursor->elsp_submitted = req0->elsp_submitted;
- list_del(&req0->execlist_link);
- list_add_tail(&req0->execlist_link,
- &engine->execlist_retired_req_list);
- req0 = cursor;
- } else {
- req1 = cursor;
+ if (!i915_seqno_passed(seqno, req->fence.seqno))
break;
- }
- }

- if (IS_GEN8(engine->dev) || IS_GEN9(engine->dev)) {
- /*
- * WaIdleLiteRestore: make sure we never cause a lite
- * restore with HEAD==TAIL
+ /* Move the completed set of requests from the start of the
+ * execlist_queue over to the tail of the execlist_completed.
*/
- if (req0->elsp_submitted) {
- /*
- * Apply the wa NOOPS to prevent ring:HEAD == req:TAIL
- * as we resubmit the request. See gen8_add_request()
- * for where we prepare the padding after the end of the
- * request.
- */
- struct intel_ring *ring;
-
- ring = req0->ctx->engine[engine->id].ring;
- req0->tail += 8;
- req0->tail &= ring->size - 1;
- }
- }
-
- WARN_ON(req1 && req1->elsp_submitted);
+ engine->execlist_completed.prev->next = engine->execlist_queue.next;
+ engine->execlist_completed.prev = &req->execlist_link;

- execlists_submit_requests(req0, req1);
-}
-
-static bool execlists_check_remove_request(struct intel_engine_cs *ring,
- u32 request_id)
-{
- struct drm_i915_gem_request *head_req;
+ engine->execlist_queue.next = req->execlist_link.next;
+ req->execlist_link.next->prev = &engine->execlist_queue;

- assert_spin_locked(&ring->execlist_lock);
+ req->execlist_link.next = &engine->execlist_completed;

- head_req = list_first_entry_or_null(&ring->execlist_queue,
- struct drm_i915_gem_request,
- execlist_link);
-
- if (head_req != NULL) {
- struct drm_i915_gem_object *ctx_obj =
- head_req->ctx->engine[ring->id].state;
- if (intel_execlists_ctx_id(ctx_obj) == request_id) {
- WARN(head_req->elsp_submitted == 0,
- "Never submitted head request\n");
-
- if (--head_req->elsp_submitted <= 0) {
- list_del(&head_req->execlist_link);
- list_add_tail(&head_req->execlist_link,
- &ring->execlist_retired_req_list);
- return true;
- }
- }
- }
+ /* The hardware has completed the request on this port, it
+ * will switch to the next.
+ */
+ engine->execlist_port[0] = engine->execlist_port[1];
+ engine->execlist_port[1] = NULL;
+ } while (1);

- return false;
+ if (engine->execlist_context_descriptor & GEN8_CTX_FORCE_RESTORE)
+ return engine->execlist_port[0] == NULL;
+ else
+ return engine->execlist_port[1] == NULL;
}

static void set_rtpriority(void)
@@ -504,23 +404,29 @@ static int intel_execlists_submit(void *arg)
{
struct intel_engine_cs *ring = arg;
struct drm_i915_private *dev_priv = ring->i915;
+ const i915_reg_t ptrs = RING_CONTEXT_STATUS_PTR(ring);

set_rtpriority();

+ intel_uncore_forcewake_get(dev_priv, FORCEWAKE_ALL);
do {
- u32 status;
- u32 status_id;
- u32 submit_contexts;
u8 head, tail;
+ u32 seqno;

set_current_state(TASK_INTERRUPTIBLE);
- head = ring->next_context_status_buffer;
- tail = I915_READ(RING_CONTEXT_STATUS_PTR(ring)) & GEN8_CSB_PTR_MASK;
+ head = tail = 0;
+ if (READ_ONCE(ring->execlist_port[0])) {
+ u32 x = I915_READ_FW(ptrs);
+ head = x >> 8;
+ tail = x;
+ }
if (head == tail) {
+ intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);
if (kthread_should_stop())
return 0;

schedule();
+ intel_uncore_forcewake_get(dev_priv, FORCEWAKE_ALL);
continue;
}
__set_current_state(TASK_RUNNING);
@@ -528,86 +434,46 @@ static int intel_execlists_submit(void *arg)
if (head > tail)
tail += GEN8_CSB_ENTRIES;

- status = 0;
- submit_contexts = 0;
-
- spin_lock(&ring->execlist_lock);
-
+ seqno = 0;
while (head++ < tail) {
- status = I915_READ(RING_CONTEXT_STATUS_BUF_LO(ring, head % GEN8_CSB_ENTRIES));
- status_id = I915_READ(RING_CONTEXT_STATUS_BUF_HI(ring, head % GEN8_CSB_ENTRIES));
-
- if (status & GEN8_CTX_STATUS_IDLE_ACTIVE)
- continue;
-
- if (status & GEN8_CTX_STATUS_PREEMPTED) {
- if (status & GEN8_CTX_STATUS_LITE_RESTORE) {
- if (execlists_check_remove_request(ring, status_id))
- WARN(1, "Lite Restored request removed from queue\n");
- } else
- WARN(1, "Preemption without Lite Restore\n");
- }
-
- if ((status & GEN8_CTX_STATUS_ACTIVE_IDLE) ||
- (status & GEN8_CTX_STATUS_ELEMENT_SWITCH)) {
- if (execlists_check_remove_request(ring, status_id))
- submit_contexts++;
+ u32 status = I915_READ_FW(RING_CONTEXT_STATUS_BUF_LO(ring,
+ head % GEN8_CSB_ENTRIES));
+ if (unlikely(status & GEN8_CTX_STATUS_PREEMPTED && 0)) {
+ DRM_ERROR("Pre-empted request %x %s Lite Restore\n",
+ I915_READ_FW(RING_CONTEXT_STATUS_BUF_HI(ring, head % GEN8_CSB_ENTRIES)),
+ status & GEN8_CTX_STATUS_LITE_RESTORE ? "with" : "without");
}
+ if (status & (GEN8_CTX_STATUS_ACTIVE_IDLE |
+ GEN8_CTX_STATUS_ELEMENT_SWITCH))
+ seqno = I915_READ_FW(RING_CONTEXT_STATUS_BUF_HI(ring,
+ head % GEN8_CSB_ENTRIES));
}

- if (disable_lite_restore_wa(ring)) {
- /* Prevent a ctx from preempting itself */
- if ((status & GEN8_CTX_STATUS_ACTIVE_IDLE) &&
- (submit_contexts != 0))
+ I915_WRITE_FW(ptrs,
+ _MASKED_FIELD(GEN8_CSB_PTR_MASK<<8,
+ (tail % GEN8_CSB_ENTRIES) << 8));
+
+ if (seqno) {
+ spin_lock(&ring->execlist_lock);
+ if (execlists_complete_requests(ring, seqno))
execlists_context_unqueue(ring);
- } else if (submit_contexts != 0) {
- execlists_context_unqueue(ring);
+ spin_unlock(&ring->execlist_lock);
}
-
- spin_unlock(&ring->execlist_lock);
-
- WARN(submit_contexts > 2, "More than two context complete events?\n");
- ring->next_context_status_buffer = tail % GEN8_CSB_ENTRIES;
- I915_WRITE(RING_CONTEXT_STATUS_PTR(ring),
- _MASKED_FIELD(GEN8_CSB_PTR_MASK << 8,
- ring->next_context_status_buffer<<8));
} while (1);
}

static int execlists_context_queue(struct drm_i915_gem_request *request)
{
struct intel_engine_cs *engine = request->engine;
- struct drm_i915_gem_request *cursor;
- int num_elements = 0;

i915_gem_request_get(request);

spin_lock(&engine->execlist_lock);
-
- list_for_each_entry(cursor, &engine->execlist_queue, execlist_link)
- if (++num_elements > 2)
- break;
-
- if (num_elements > 2) {
- struct drm_i915_gem_request *tail_req;
-
- tail_req = list_last_entry(&engine->execlist_queue,
- struct drm_i915_gem_request,
- execlist_link);
-
- if (request->ctx == tail_req->ctx) {
- WARN(tail_req->elsp_submitted != 0,
- "More than 2 already-submitted reqs queued\n");
- list_del(&tail_req->execlist_link);
- list_add_tail(&tail_req->execlist_link,
- &engine->execlist_retired_req_list);
- }
- }
-
list_add_tail(&request->execlist_link, &engine->execlist_queue);
- if (num_elements == 0)
- execlists_context_unqueue(engine);
-
+ if (engine->execlist_port[0] == NULL) {
+ engine->execlist_port[0] = request;
+ execlists_submit_pair(engine);
+ }
spin_unlock(&engine->execlist_lock);

return 0;
@@ -641,56 +507,32 @@ int intel_logical_ring_alloc_request_extras(struct drm_i915_gem_request *request
return 0;
}

-/*
- * intel_logical_ring_advance_and_submit() - advance the tail and submit the workload
- * @request: Request to advance the logical ringbuffer of.
- *
- * The tail is updated in our logical ringbuffer struct, not in the actual context. What
- * really happens during submission is that the context and current tail will be placed
- * on a queue waiting for the ELSP to be ready to accept a new context submission. At that
- * point, the tail *inside* the context is updated and the ELSP written to.
- */
-static void
-intel_logical_ring_advance_and_submit(struct drm_i915_gem_request *request)
-{
- struct drm_i915_private *dev_priv = request->i915;
-
- intel_ring_advance(request->ring);
- request->tail = request->ring->tail;
-
- if (dev_priv->guc.execbuf_client)
- i915_guc_submit(dev_priv->guc.execbuf_client, request);
- else
- execlists_context_queue(request);
-}
-
bool intel_execlists_retire_requests(struct intel_engine_cs *ring)
{
struct drm_i915_gem_request *req, *tmp;
- struct list_head retired_list;
+ struct list_head list;

- WARN_ON(!mutex_is_locked(&ring->dev->struct_mutex));
- if (list_empty(&ring->execlist_retired_req_list))
+ lockdep_assert_held(&ring->dev->struct_mutex);
+ if (list_empty(&ring->execlist_completed))
goto out;

- INIT_LIST_HEAD(&retired_list);
spin_lock(&ring->execlist_lock);
- list_replace_init(&ring->execlist_retired_req_list, &retired_list);
+ list_replace_init(&ring->execlist_completed, &list);
spin_unlock(&ring->execlist_lock);

- list_for_each_entry_safe(req, tmp, &retired_list, execlist_link) {
+ list_for_each_entry_safe(req, tmp, &list, execlist_link) {
struct intel_context *ctx = req->ctx;
struct drm_i915_gem_object *ctx_obj =
ctx->engine[ring->id].state;

if (ctx_obj && (ctx != ring->default_context))
intel_lr_context_unpin(req);
- list_del(&req->execlist_link);
+
i915_gem_request_put(req);
}

out:
- return list_empty(&ring->execlist_queue);
+ return READ_ONCE(ring->execlist_port[0]) == NULL;
}

void intel_logical_ring_stop(struct intel_engine_cs *ring)
@@ -720,6 +562,7 @@ static int intel_lr_context_do_pin(struct intel_engine_cs *ring,
struct intel_ring *ringbuf)
{
struct drm_i915_private *dev_priv = ring->i915;
+ u32 ggtt_offset;
int ret = 0;

WARN_ON(!mutex_is_locked(&ring->dev->struct_mutex));
@@ -734,6 +577,16 @@ static int intel_lr_context_do_pin(struct intel_engine_cs *ring,

ctx_obj->dirty = true;

+ ggtt_offset =
+ i915_gem_obj_ggtt_offset(ctx_obj) + LRC_PPHWSP_PN * PAGE_SIZE;
+ ringbuf->context_descriptor =
+ ggtt_offset | ring->execlist_context_descriptor;
+
+ ringbuf->registers =
+ kmap(i915_gem_object_get_dirty_page(ctx_obj, LRC_STATE_PN));
+ ringbuf->registers[CTX_RING_BUFFER_START+1] =
+ i915_gem_obj_ggtt_offset(ringbuf->obj);
+
/* Invalidate GuC TLB. */
if (i915.enable_guc_submission)
I915_WRITE(GEN8_GTCR, GEN8_GTCR_INVALIDATE);
@@ -768,6 +621,7 @@ static int intel_lr_context_pin(struct drm_i915_gem_request *rq)

void intel_lr_context_unpin(struct drm_i915_gem_request *rq)
{
+ struct drm_i915_gem_object *ctx_obj;
int engine = rq->engine->id;

WARN_ON(!mutex_is_locked(&rq->i915->dev->struct_mutex));
@@ -775,7 +629,10 @@ void intel_lr_context_unpin(struct drm_i915_gem_request *rq)
return;

intel_ring_unmap(rq->ring);
- i915_gem_object_ggtt_unpin(rq->ctx->engine[engine].state);
+
+ ctx_obj = rq->ctx->engine[engine].state;
+ kunmap(i915_gem_object_get_page(ctx_obj, LRC_STATE_PN));
+ i915_gem_object_ggtt_unpin(ctx_obj);
i915_gem_context_unreference(rq->ctx);
}

@@ -1168,12 +1025,39 @@ out:
return ret;
}

+static bool disable_lite_restore_wa(struct intel_engine_cs *ring)
+{
+ return (IS_SKL_REVID(ring->i915, 0, SKL_REVID_B0) ||
+ IS_BXT_REVID(ring->i915, 0, BXT_REVID_A1)) &&
+ (ring->id == VCS || ring->id == VCS2);
+}
+
+static uint64_t lr_context_descriptor(struct intel_engine_cs *ring)
+{
+ uint64_t desc;
+
+ desc = GEN8_CTX_VALID;
+ desc |= GEN8_CTX_ADDRESSING_MODE(ring->i915) << GEN8_CTX_ADDRESSING_MODE_SHIFT;
+ if (IS_GEN8(ring->i915))
+ desc |= GEN8_CTX_L3LLC_COHERENT;
+ desc |= GEN8_CTX_PRIVILEGE;
+
+ /* TODO: WaDisableLiteRestore when we start using semaphore
+ * signalling between Command Streamers */
+ /* desc |= GEN8_CTX_FORCE_RESTORE; */
+
+ /* WaEnableForceRestoreInCtxtDescForVCS:skl */
+ /* WaEnableForceRestoreInCtxtDescForVCS:bxt */
+ if (disable_lite_restore_wa(ring))
+ desc |= GEN8_CTX_FORCE_RESTORE;
+
+ return desc;
+}
+
static int gen8_init_common_ring(struct intel_engine_cs *ring)
{
struct drm_device *dev = ring->dev;
struct drm_i915_private *dev_priv = dev->dev_private;
- u8 next_context_status_buffer_hw;
-
lrc_setup_hardware_status_page(ring,
ring->default_context->engine[ring->id].state);

@@ -1197,18 +1081,6 @@ static int gen8_init_common_ring(struct intel_engine_cs *ring)
* SKL | ? | ? |
* BXT | ? | ? |
*/
- next_context_status_buffer_hw =
- GEN8_CSB_WRITE_PTR(I915_READ(RING_CONTEXT_STATUS_PTR(ring)));
-
- /*
- * When the CSB registers are reset (also after power-up / gpu reset),
- * CSB write pointer is set to all 1's, which is not valid, use '5' in
- * this special case, so the first element read is CSB[0].
- */
- if (next_context_status_buffer_hw == GEN8_CSB_PTR_MASK)
- next_context_status_buffer_hw = (GEN8_CSB_ENTRIES - 1);
-
- ring->next_context_status_buffer = next_context_status_buffer_hw;
DRM_DEBUG_DRIVER("Execlists enabled for %s\n", ring->name);

memset(&ring->hangcheck, 0, sizeof(ring->hangcheck));
@@ -1482,7 +1354,8 @@ static int gen8_add_request(struct drm_i915_gem_request *request)
intel_ring_emit(ring, request->fence.seqno);
intel_ring_emit(ring, MI_USER_INTERRUPT);
intel_ring_emit(ring, MI_NOOP);
- intel_logical_ring_advance_and_submit(request);
+ intel_ring_advance(ring);
+ request->tail = ring->tail;

/*
* Here we add two extra NOOPs as padding to avoid
@@ -1491,6 +1364,12 @@ static int gen8_add_request(struct drm_i915_gem_request *request)
intel_ring_emit(ring, MI_NOOP);
intel_ring_emit(ring, MI_NOOP);
intel_ring_advance(ring);
+ request->wa_tail = ring->tail;
+
+ if (request->i915->guc.execbuf_client)
+ i915_guc_submit(request->i915->guc.execbuf_client, request);
+ else
+ execlists_context_queue(request);

return 0;
}
@@ -1569,9 +1448,11 @@ static int logical_ring_init(struct drm_device *dev, struct intel_engine_cs *rin

INIT_LIST_HEAD(&ring->buffers);
INIT_LIST_HEAD(&ring->execlist_queue);
- INIT_LIST_HEAD(&ring->execlist_retired_req_list);
+ INIT_LIST_HEAD(&ring->execlist_completed);
spin_lock_init(&ring->execlist_lock);

+ ring->execlist_context_descriptor = lr_context_descriptor(ring);
+
ret = i915_cmd_parser_init_ring(ring);
if (ret)
goto error;
@@ -1592,8 +1473,6 @@ static int logical_ring_init(struct drm_device *dev, struct intel_engine_cs *rin
goto error;
}

- ring->next_context_status_buffer =
- I915_READ(RING_CONTEXT_STATUS_PTR(ring)) & GEN8_CSB_PTR_MASK;
task = kthread_run(intel_execlists_submit, ring,
"irq/i915:%de", ring->id);
if (IS_ERR(task))
@@ -1904,9 +1783,7 @@ populate_lr_context(struct intel_context *ctx, struct drm_i915_gem_object *ctx_o
CTX_CTRL_RS_CTX_ENABLE));
ASSIGN_CTX_REG(reg_state, CTX_RING_HEAD, RING_HEAD(ring->mmio_base), 0);
ASSIGN_CTX_REG(reg_state, CTX_RING_TAIL, RING_TAIL(ring->mmio_base), 0);
- /* Ring buffer start address is not known until the buffer is pinned.
- * It is written to the context image in execlists_update_context()
- */
+ /* Ring buffer start address is not known until the buffer is pinned. */
ASSIGN_CTX_REG(reg_state, CTX_RING_BUFFER_START, RING_START(ring->mmio_base), 0);
ASSIGN_CTX_REG(reg_state, CTX_RING_BUFFER_CONTROL, RING_CTL(ring->mmio_base),
((ringbuf->size - PAGE_SIZE) & RING_NR_PAGES) | RING_VALID);
diff --git a/drivers/gpu/drm/i915/intel_lrc.h b/drivers/gpu/drm/i915/intel_lrc.h
index 33f82a84065a..37601a35d5fc 100644
--- a/drivers/gpu/drm/i915/intel_lrc.h
+++ b/drivers/gpu/drm/i915/intel_lrc.h
@@ -74,12 +74,9 @@ int intel_lr_context_deferred_alloc(struct intel_context *ctx,
void intel_lr_context_unpin(struct drm_i915_gem_request *req);
void intel_lr_context_reset(struct drm_device *dev,
struct intel_context *ctx);
-uint64_t intel_lr_context_descriptor(struct intel_context *ctx,
- struct intel_engine_cs *ring);

/* Execlists */
int intel_sanitize_enable_execlists(struct drm_device *dev, int enable_execlists);
-u32 intel_execlists_ctx_id(struct drm_i915_gem_object *ctx_obj);

bool intel_execlists_retire_requests(struct intel_engine_cs *ring);

diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index edaf07b2292e..3d4d5711aea9 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -122,6 +122,9 @@ struct intel_ring {
* we can detect new retirements.
*/
u32 last_retired_head;
+
+ u32 context_descriptor;
+ u32 *registers;
};

struct intel_context;
@@ -293,9 +296,10 @@ struct intel_engine_cs {
/* Execlists */
struct task_struct *execlists_submit;
spinlock_t execlist_lock;
+ struct drm_i915_gem_request *execlist_port[2];
struct list_head execlist_queue;
- struct list_head execlist_retired_req_list;
- u8 next_context_status_buffer;
+ struct list_head execlist_completed;
+ u32 execlist_context_descriptor;
u32 irq_keep_mask; /* bitmask for interrupts that should not be masked */

/**
--
2.7.0.rc3
Chris Wilson
2016-01-11 10:44:37 UTC
Permalink
Currently, we always switch back to the kernel context (if available,
i.e. legacy HW contexts, not execlists) whenever we try to idle the GPU.
We actually only require the switch when trying to evict everything (in
order to prevent fragmentation from placement of the currently active
context) from the global GTT, so move the forced switch into that one
callsite.

In the process, update the comments regarding the mode of operation, in
particular the distinction between evicting from the global GTT (which
may contain untracked items and transient global pins) and the
per-process GTT.
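
Concretely, the fallback path in i915_gem_evict_something() then looks
roughly like this (a sketch of the logic in the patch below, with the
scan cleanup abbreviated):

        /* Only the global GTT holds untracked entries (scanouts,
         * ringbuffers, contexts), so only GGTT eviction benefits
         * from switching away from the current context.
         */
        if (!i915_is_ggtt(vm) || flags & PIN_NONBLOCK)
                return -ENOSPC;

        ret = switch_to_pinned_context(to_i915(dev));
        if (ret < 0)
                return ret;

        if (ret > 0) {
                /* A switch was emitted; idle and retire, then rescan. */
                ret = i915_gpu_idle(dev);
                if (ret)
                        return ret;
                i915_gem_retire_requests(dev);
                goto search_again;
        }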

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_gem.c | 14 ----
drivers/gpu/drm/i915/i915_gem_evict.c | 140 +++++++++++++++++++++-------------
2 files changed, 88 insertions(+), 66 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 054e11cff00f..989222eb107b 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2691,21 +2691,7 @@ int i915_gpu_idle(struct drm_device *dev)
struct intel_engine_cs *ring;
int ret, i;

- /* Flush everything onto the inactive list. */
for_each_ring(ring, dev_priv, i) {
- if (!i915.enable_execlists) {
- struct drm_i915_gem_request *req;
-
- req = i915_gem_request_alloc(ring, ring->default_context);
- if (IS_ERR(req))
- return PTR_ERR(req);
-
- ret = i915_switch_context(req);
- i915_add_request_no_flush(req);
- if (ret)
- return ret;
- }
-
ret = intel_engine_idle(ring);
if (ret)
return ret;
diff --git a/drivers/gpu/drm/i915/i915_gem_evict.c b/drivers/gpu/drm/i915/i915_gem_evict.c
index ea1f8d1bd228..b7bcc324a7a7 100644
--- a/drivers/gpu/drm/i915/i915_gem_evict.c
+++ b/drivers/gpu/drm/i915/i915_gem_evict.c
@@ -33,6 +33,36 @@
#include "intel_drv.h"
#include "i915_trace.h"

+static int switch_to_pinned_context(struct drm_i915_private *dev_priv)
+{
+ struct intel_engine_cs *ring;
+ int ret, i;
+ int count = 0;
+
+ if (i915.enable_execlists)
+ return 0;
+
+ for_each_ring(ring, dev_priv, i) {
+ struct drm_i915_gem_request *req;
+
+ if (ring->last_context == ring->default_context)
+ continue;
+
+ req = i915_gem_request_alloc(ring, ring->default_context);
+ if (IS_ERR(req))
+ return PTR_ERR(req);
+
+ ret = i915_switch_context(req);
+ i915_add_request_no_flush(req);
+ if (ret)
+ return ret;
+
+ count++;
+ }
+
+ return count;
+}
+
static bool
mark_free(struct i915_vma *vma, struct list_head *unwind)
{
@@ -76,37 +106,33 @@ i915_gem_evict_something(struct drm_device *dev, struct i915_address_space *vm,
unsigned long start, unsigned long end,
unsigned flags)
{
- struct list_head eviction_list, unwind_list;
- struct i915_vma *vma;
- int ret = 0;
- int pass = 0;
+ struct list_head eviction_list;
+ struct list_head *phases[] = {
+ &vm->inactive_list,
+ &vm->active_list,
+ NULL,
+ }, **phase;
+ struct i915_vma *vma, *next;
+ int ret;

trace_i915_gem_evict(dev, min_size, alignment, flags);

/*
* The goal is to evict objects and amalgamate space in LRU order.
* The oldest idle objects reside on the inactive list, which is in
- * retirement order. The next objects to retire are those on the (per
- * ring) active list that do not have an outstanding flush. Once the
- * hardware reports completion (the seqno is updated after the
- * batchbuffer has been finished) the clean buffer objects would
- * be retired to the inactive list. Any dirty objects would be added
- * to the tail of the flushing list. So after processing the clean
- * active objects we need to emit a MI_FLUSH to retire the flushing
- * list, hence the retirement order of the flushing list is in
- * advance of the dirty objects on the active lists.
+ * retirement order. The next objects to retire are those in flight,
+ * on the active list, again in retirement order.
*
* The retirement sequence is thus:
* 1. Inactive objects (already retired)
- * 2. Clean active objects
- * 3. Flushing list
- * 4. Dirty active objects.
+ * 2. Active objects (will stall on unbinding)
*
* On each list, the oldest objects lie at the HEAD with the freshest
* object on the TAIL.
*/

- INIT_LIST_HEAD(&unwind_list);
+search_again:
+ INIT_LIST_HEAD(&eviction_list);
if (start != 0 || end != vm->total) {
drm_mm_init_scan_with_range(&vm->mm, min_size,
alignment, cache_level,
@@ -114,26 +140,19 @@ i915_gem_evict_something(struct drm_device *dev, struct i915_address_space *vm,
} else
drm_mm_init_scan(&vm->mm, min_size, alignment, cache_level);

-search_again:
- /* First see if there is a large enough contiguous idle region... */
- list_for_each_entry(vma, &vm->inactive_list, vm_link) {
- if (mark_free(vma, &unwind_list))
- goto found;
- }
-
if (flags & PIN_NONBLOCK)
- goto none;
+ phases[1] = NULL;

- /* Now merge in the soon-to-be-expired objects... */
- list_for_each_entry(vma, &vm->active_list, vm_link) {
- if (mark_free(vma, &unwind_list))
- goto found;
- }
+ phase = phases;
+ do {
+ list_for_each_entry(vma, *phase, vm_link)
+ if (mark_free(vma, &eviction_list))
+ goto found;
+ } while (*++phase);

-none:
/* Nothing found, clean up and bail out! */
- while (!list_empty(&unwind_list)) {
- vma = list_first_entry(&unwind_list,
+ while (!list_empty(&eviction_list)) {
+ vma = list_first_entry(&eviction_list,
struct i915_vma,
exec_list);
ret = drm_mm_scan_remove_block(&vma->node);
@@ -143,13 +162,24 @@ none:
}

/* Can we unpin some objects such as idle hw contents,
- * or pending flips?
+ * or pending flips? But since only the GGTT has global entries
+ * such as scanouts, ringbuffers and contexts, we can skip the
+ * purge when inspecting per-process local address spaces.
*/
- if (flags & PIN_NONBLOCK)
+ if (!i915_is_ggtt(vm) || flags & PIN_NONBLOCK)
return -ENOSPC;

- /* Only idle the GPU and repeat the search once */
- if (pass++ == 0) {
+ /* Not everything in the GGTT is tracked via vma (otherwise we
+ * could evict as required with minimal stalling) so we are forced
+ * to idle the GPU and explicitly retire outstanding requests in
+ * the hopes that we can then remove contexts and the like only
+ * bound by their active reference.
+ */
+ ret = switch_to_pinned_context(to_i915(dev));
+ if (ret < 0)
+ return ret;
+
+ if (ret > 0) {
ret = i915_gpu_idle(dev);
if (ret)
return ret;
@@ -166,19 +196,16 @@ none:

found:
/* drm_mm doesn't allow any other other operations while
- * scanning, therefore store to be evicted objects on a
- * temporary list. */
- INIT_LIST_HEAD(&eviction_list);
- while (!list_empty(&unwind_list)) {
- vma = list_first_entry(&unwind_list,
- struct i915_vma,
- exec_list);
- if (drm_mm_scan_remove_block(&vma->node)) {
- list_move(&vma->exec_list, &eviction_list);
+ * scanning, therefore store to-be-evicted objects on a
+ * temporary list and take a reference for all before
+ * calling unbind (which may remove the active reference
+ * of any of our objects, thus corrupting the list).
+ */
+ list_for_each_entry_safe(vma, next, &eviction_list, exec_list) {
+ if (drm_mm_scan_remove_block(&vma->node))
drm_gem_object_reference(&vma->obj->base);
- continue;
- }
- list_del_init(&vma->exec_list);
+ else
+ list_del_init(&vma->exec_list);
}

/* Unbinding will emit any required flushes */
@@ -195,7 +222,6 @@ found:

drm_gem_object_unreference(obj);
}
-
return ret;
}

@@ -261,12 +287,22 @@ int i915_gem_evict_vm(struct i915_address_space *vm, bool do_idle)
trace_i915_gem_evict_vm(vm);

if (do_idle) {
+ /* Switch back to the default context in order to unpin
+ * the existing context objects. However, such objects only
+ * pin themselves inside the global GTT and performing the
+ * switch otherwise is ineffective.
+ */
+ if (i915_is_ggtt(vm)) {
+ ret = switch_to_pinned_context(to_i915(vm->dev));
+ if (ret)
+ return ret;
+ }
+
ret = i915_gpu_idle(vm->dev);
- if (ret)
+ if (ret < 0)
return ret;

i915_gem_retire_requests(vm->dev);
-
WARN_ON(!list_empty(&vm->active_list));
}
--
2.7.0.rc3
Chris Wilson
2016-01-11 10:44:40 UTC
Permalink
Now that the first request is simplified to a pure context-enabling
request (i.e. any request will do the required initialisation as
appropriate), we can forgo explicitly sending that request during early
hw initialisation. The only reason we might want to do so is to enable
power contexts; if that is actually required, we should move it to the
asynchronous power management enabling task.
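
As a hedged pseudocode sketch of the lazy path assumed here (the
'initialised' flag is illustrative, not taken from this patch): the
first request submitted on a context performs the one-off setup itself,
so no boot-time request is required.

        if (!ctx->engine[engine->id].initialised) {
                ret = engine->init_context(req);
                if (ret)
                        return ret;
                ctx->engine[engine->id].initialised = true;
        }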

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_drv.h | 1 -
drivers/gpu/drm/i915/i915_gem.c | 24 ------------------------
drivers/gpu/drm/i915/i915_gem_context.c | 21 ---------------------
3 files changed, 46 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 7dc3eed71eb3..be63eaf8764a 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2967,7 +2967,6 @@ int __must_check i915_gem_context_init(struct drm_device *dev);
void i915_gem_context_fini(struct drm_device *dev);
void i915_gem_context_reset(struct drm_device *dev);
int i915_gem_context_open(struct drm_device *dev, struct drm_file *file);
-int i915_gem_context_enable(struct drm_i915_gem_request *req);
void i915_gem_context_close(struct drm_device *dev, struct drm_file *file);
int i915_switch_context(struct drm_i915_gem_request *req);
struct intel_context *
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index d157ae1e5c2a..a0207b9d1aea 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -4117,30 +4117,6 @@ i915_gem_init_hw(struct drm_device *dev)
}
}

- /* Now it is safe to go back round and do everything else: */
- for_each_ring(ring, dev_priv, i) {
- struct drm_i915_gem_request *req;
-
- WARN_ON(!ring->default_context);
-
- req = i915_gem_request_alloc(ring, ring->default_context);
- if (IS_ERR(req)) {
- ret = PTR_ERR(req);
- i915_gem_cleanup_ringbuffer(dev);
- goto out;
- }
-
- ret = i915_gem_context_enable(req);
- if (ret && ret != -EIO) {
- DRM_ERROR("Context enable ring #%d failed %d\n", i, ret);
- i915_gem_request_cancel(req);
- i915_gem_cleanup_ringbuffer(dev);
- goto out;
- }
-
- i915_add_request_no_flush(req);
- }
-
out:
intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);
return ret;
diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index 87f86017ab26..9f9892525945 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -459,27 +459,6 @@ void i915_gem_context_fini(struct drm_device *dev)
i915_gem_context_unreference(dctx);
}

-int i915_gem_context_enable(struct drm_i915_gem_request *req)
-{
- struct intel_engine_cs *engine = req->engine;
- int ret;
-
- if (i915.enable_execlists) {
- if (engine->init_context == NULL)
- return 0;
-
- ret = engine->init_context(req);
- } else
- ret = i915_switch_context(req);
-
- if (ret) {
- DRM_ERROR("ring init context: %d\n", ret);
- return ret;
- }
-
- return 0;
-}
-
static int context_idr_cleanup(int id, void *p, void *data)
{
struct intel_context *ctx = p;
--
2.7.0.rc3
Chris Wilson
2016-01-11 10:44:39 UTC
Permalink
The call to switch_mm() is already handled by i915_switch_context(); the
only difference required to set up the aliasing ppgtt is that we need to
emit the switch_mm() on the first context, i.e. when transitioning from
engine->last_context == NULL.
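
In code, the first switch simply falls back to the aliasing ppgtt when
the context has no full ppgtt of its own (condensed from the hunk in
do_switch() below):

        if (from == NULL || needs_pd_load_pre(engine, to)) {
                /* No saved page-table state exists for the very first
                 * switch, so load a ppgtt: the context's own, or else
                 * the aliasing ppgtt.
                 */
                struct i915_hw_ppgtt *ppgtt =
                        to->ppgtt ?: req->i915->mm.aliasing_ppgtt;

                ret = ppgtt->switch_mm(ppgtt, req);
                if (ret)
                        goto unpin_out;

                /* Doing a PD load always reloads the page dirs */
                ppgtt->pd_dirty_rings &= ~intel_engine_flag(engine);
        }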

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_gem.c | 8 --------
drivers/gpu/drm/i915/i915_gem_context.c | 10 +++++++---
drivers/gpu/drm/i915/i915_gem_gtt.c | 13 -------------
drivers/gpu/drm/i915/i915_gem_gtt.h | 1 -
4 files changed, 7 insertions(+), 25 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 379913221ab1..d157ae1e5c2a 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -4130,14 +4130,6 @@ i915_gem_init_hw(struct drm_device *dev)
goto out;
}

- ret = i915_ppgtt_init_ring(req);
- if (ret && ret != -EIO) {
- DRM_ERROR("PPGTT enable ring #%d failed %d\n", i, ret);
- i915_gem_request_cancel(req);
- i915_gem_cleanup_ringbuffer(dev);
- goto out;
- }
-
ret = i915_gem_context_enable(req);
if (ret && ret != -EIO) {
DRM_ERROR("Context enable ring #%d failed %d\n", i, ret);
diff --git a/drivers/gpu/drm/i915/i915_gem_context.c b/drivers/gpu/drm/i915/i915_gem_context.c
index 15e2e2abd72d..87f86017ab26 100644
--- a/drivers/gpu/drm/i915/i915_gem_context.c
+++ b/drivers/gpu/drm/i915/i915_gem_context.c
@@ -719,18 +719,22 @@ static int do_switch(struct drm_i915_gem_request *req)
*/
from = engine->last_context;

- if (needs_pd_load_pre(engine, to)) {
+ if (from == NULL || needs_pd_load_pre(engine, to)) {
+ struct i915_hw_ppgtt *ppgtt;
+
/* Older GENs and non render rings still want the load first,
* "PP_DCLV followed by PP_DIR_BASE register through Load
* Register Immediate commands in Ring Buffer before submitting
* a context."*/
trace_switch_mm(engine, to);
- ret = to->ppgtt->switch_mm(to->ppgtt, req);
+
+ ppgtt = to->ppgtt ?: req->i915->mm.aliasing_ppgtt;
+ ret = ppgtt->switch_mm(ppgtt, req);
if (ret)
goto unpin_out;

/* Doing a PD load always reloads the page dirs */
- to->ppgtt->pd_dirty_rings &= ~intel_engine_flag(engine);
+ ppgtt->pd_dirty_rings &= ~intel_engine_flag(engine);
}

if (engine->id != RCS) {
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index ad26c9e331aa..61ec8f28be72 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -2173,19 +2173,6 @@ int i915_ppgtt_init_hw(struct drm_device *dev)
return 0;
}

-int i915_ppgtt_init_ring(struct drm_i915_gem_request *req)
-{
- struct i915_hw_ppgtt *ppgtt = req->i915->mm.aliasing_ppgtt;
-
- if (i915.enable_execlists)
- return 0;
-
- if (!ppgtt)
- return 0;
-
- return ppgtt->switch_mm(ppgtt, req);
-}
-
struct i915_hw_ppgtt *
i915_ppgtt_create(struct drm_i915_private *dev_priv,
struct drm_i915_file_private *fpriv)
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
index 6346d1786d41..bb3dd5fe1a3c 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.h
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
@@ -547,7 +547,6 @@ int i915_ppgtt_init(struct i915_hw_ppgtt *ppgtt,
struct drm_i915_private *dev_priv,
struct drm_i915_file_private *file_priv);
int i915_ppgtt_init_hw(struct drm_device *dev);
-int i915_ppgtt_init_ring(struct drm_i915_gem_request *req);
void i915_ppgtt_release(struct kref *kref);
struct i915_hw_ppgtt *i915_ppgtt_create(struct drm_i915_private *dev_priv,
struct drm_i915_file_private *fpriv);
--
2.7.0.rc3
Chris Wilson
2016-01-11 10:44:52 UTC
Permalink
During execbuffer we look up the i915_vma in order to reserve them in
the VM. However, we then do a second lookup of the same vma in order to
pin them, all because we lack the necessary interfaces to operate on the
i915_vma directly.
i915_vma.

v2: Tidy parameter lists to remove one level of redirection in the hot
path.
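
The effect on the reservation path is easiest to see side by side (a
sketch; entry and vma as in the execbuffer hunk below):

        /* Before: re-derive the vma from (obj, vm) inside the pin call. */
        ret = i915_gem_object_pin(vma->obj, vma->vm,
                                  entry->pad_to_size,
                                  entry->alignment,
                                  flags);

        /* After: pin the vma we already hold from the lookup. */
        ret = i915_vma_pin(vma,
                           entry->pad_to_size,
                           entry->alignment,
                           flags);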

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
Cc: Mika Kuoppala <***@intel.com>
---
drivers/gpu/drm/i915/i915_drv.h | 28 +++--
drivers/gpu/drm/i915/i915_gem.c | 159 ++++++++++++-----------------
drivers/gpu/drm/i915/i915_gem_execbuffer.c | 127 +++++++++++------------
drivers/gpu/drm/i915/i915_gem_gtt.c | 3 -
4 files changed, 144 insertions(+), 173 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 7df6cfabe7fa..f6e508e5aa5b 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2680,6 +2680,11 @@ struct drm_i915_gem_object *i915_gem_object_create_from_data(
void i915_gem_close_object(struct drm_gem_object *gem, struct drm_file *file);
void i915_gem_free_object(struct drm_gem_object *obj);

+int __must_check
+i915_vma_pin(struct i915_vma *vma,
+ uint64_t size,
+ uint64_t alignment,
+ uint64_t flags);
/* Flags used by pin/bind&friends. */
#define PIN_MAPPABLE (1<<0)
#define PIN_NONBLOCK (1<<1)
@@ -2691,12 +2696,19 @@ void i915_gem_free_object(struct drm_gem_object *obj);
#define PIN_HIGH (1<<7)
#define PIN_OFFSET_FIXED (1<<8)
#define PIN_OFFSET_MASK (~4095)
-int __must_check
-i915_gem_object_pin(struct drm_i915_gem_object *obj,
- struct i915_address_space *vm,
- uint64_t size,
- uint32_t alignment,
- uint64_t flags);
+
+static inline void __i915_vma_unpin(struct i915_vma *vma)
+{
+ vma->pin_count--;
+}
+
+static inline void i915_vma_unpin(struct i915_vma *vma)
+{
+ GEM_BUG_ON(vma->pin_count == 0);
+ GEM_BUG_ON(!drm_mm_node_allocated(&vma->node));
+ __i915_vma_unpin(vma);
+}
+
int __must_check
i915_gem_object_ggtt_pin(struct drm_i915_gem_object *obj,
const struct i915_ggtt_view *view,
@@ -2933,8 +2945,8 @@ i915_gem_obj_ggtt_pin(struct drm_i915_gem_object *obj,
uint32_t alignment,
unsigned flags)
{
- return i915_gem_object_pin(obj, i915_obj_to_ggtt(obj), 0, alignment,
- flags | PIN_GLOBAL);
+ return i915_gem_object_ggtt_pin(obj, &i915_ggtt_view_normal,
+ 0, alignment, flags);
}

static inline int
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 0d4f358f4067..c6d7a78ab605 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2746,26 +2746,21 @@ static bool i915_gem_valid_gtt_space(struct i915_vma *vma,
* Finds free space in the GTT aperture and binds the object or a view of it
* there.
*/
-static struct i915_vma *
-i915_gem_object_insert_into_vm(struct drm_i915_gem_object *obj,
- struct i915_address_space *vm,
- const struct i915_ggtt_view *ggtt_view,
- uint64_t size,
- uint64_t alignment,
- uint64_t flags)
+static int
+i915_vma_insert(struct i915_vma *vma,
+ uint64_t size,
+ uint64_t alignment,
+ uint64_t flags)
{
+ struct drm_i915_gem_object *obj = vma->obj;
struct drm_device *dev = obj->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
- struct i915_vma *vma;
u64 start, end;
u64 min_alignment;
int ret;

- vma = ggtt_view ?
- i915_gem_obj_lookup_or_create_ggtt_vma(obj, ggtt_view) :
- i915_gem_obj_lookup_or_create_vma(obj, vm);
- if (IS_ERR(vma))
- return vma;
+ GEM_BUG_ON(vma->bound);
+ GEM_BUG_ON(drm_mm_node_allocated(&vma->node));

size = max(size, vma->size);
if (flags & PIN_MAPPABLE)
@@ -2779,7 +2774,7 @@ i915_gem_object_insert_into_vm(struct drm_i915_gem_object *obj,
if (alignment & (min_alignment - 1)) {
DRM_DEBUG("Invalid object alignment requested %llu, minimum %llu\n",
alignment, min_alignment);
- return ERR_PTR(-EINVAL);
+ return -EINVAL;
}

start = flags & PIN_OFFSET_BIAS ? flags & PIN_OFFSET_MASK : 0;
@@ -2799,12 +2794,12 @@ i915_gem_object_insert_into_vm(struct drm_i915_gem_object *obj,
size, obj->base.size,
flags & PIN_MAPPABLE ? "mappable" : "total",
end);
- return ERR_PTR(-E2BIG);
+ return -E2BIG;
}

ret = i915_gem_object_get_pages(obj);
if (ret)
- return ERR_PTR(ret);
+ return ret;

i915_gem_object_pin_pages(obj);

@@ -2866,13 +2861,13 @@ search_free:
list_move_tail(&vma->vm_link, &vma->vm->inactive_list);
obj->bind_count++;

- return vma;
+ return 0;

err_remove_node:
drm_mm_remove_node(&vma->node);
err_unpin:
i915_gem_object_unpin_pages(obj);
- return ERR_PTR(ret);
+ return ret;
}

bool
@@ -3435,6 +3430,9 @@ i915_vma_misplaced(struct i915_vma *vma,
{
struct drm_i915_gem_object *obj = vma->obj;

+ if (!drm_mm_node_allocated(&vma->node))
+ return false;
+
if (vma->node.size < size)
return true;

@@ -3478,94 +3476,45 @@ void __i915_vma_set_map_and_fenceable(struct i915_vma *vma)
obj->map_and_fenceable = mappable && fenceable;
}

-static int
-i915_gem_object_do_pin(struct drm_i915_gem_object *obj,
- struct i915_address_space *vm,
- const struct i915_ggtt_view *ggtt_view,
- uint64_t size,
- uint32_t alignment,
- uint64_t flags)
+int
+i915_vma_pin(struct i915_vma *vma,
+ uint64_t size,
+ uint64_t alignment,
+ uint64_t flags)
{
- struct drm_i915_private *dev_priv = obj->base.dev->dev_private;
- struct i915_vma *vma;
- unsigned bound;
+ unsigned bound = vma->bound;
int ret;

- if (WARN_ON(vm == &dev_priv->mm.aliasing_ppgtt->base))
- return -ENODEV;
-
- if (WARN_ON(flags & (PIN_GLOBAL | PIN_MAPPABLE) && !i915_is_ggtt(vm)))
- return -EINVAL;
-
- if (WARN_ON((flags & (PIN_MAPPABLE | PIN_GLOBAL)) == PIN_MAPPABLE))
- return -EINVAL;
+ GEM_BUG_ON((flags & (PIN_GLOBAL | PIN_USER)) == 0);
+ GEM_BUG_ON((flags & PIN_GLOBAL) && !vma->is_ggtt);

- if (WARN_ON(i915_is_ggtt(vm) != !!ggtt_view))
- return -EINVAL;
-
- vma = ggtt_view ? i915_gem_obj_to_ggtt_view(obj, ggtt_view) :
- i915_gem_obj_to_vma(obj, vm);
-
- if (IS_ERR(vma))
- return PTR_ERR(vma);
-
- if (vma) {
- if (WARN_ON(vma->pin_count == DRM_I915_GEM_OBJECT_MAX_PIN_COUNT))
- return -EBUSY;
-
- if (i915_vma_misplaced(vma, size, alignment, flags)) {
- WARN(vma->pin_count,
- "bo is already pinned in %s with incorrect alignment:"
- " offset=%08x %08x, req.alignment=%x, req.map_and_fenceable=%d,"
- " obj->map_and_fenceable=%d\n",
- ggtt_view ? "ggtt" : "ppgtt",
- upper_32_bits(vma->node.start),
- lower_32_bits(vma->node.start),
- alignment,
- !!(flags & PIN_MAPPABLE),
- obj->map_and_fenceable);
- ret = i915_vma_unbind(vma);
- if (ret)
- return ret;
+ if (WARN_ON(vma->pin_count == DRM_I915_GEM_OBJECT_MAX_PIN_COUNT))
+ return -EBUSY;

- vma = NULL;
- }
- }
+ /* Pin early to prevent the shrinker/eviction logic from destroying
+ * our vma as we insert and bind.
+ */
+ vma->pin_count++;

- if (vma == NULL || !drm_mm_node_allocated(&vma->node)) {
- vma = i915_gem_object_insert_into_vm(obj, vm, ggtt_view,
- size, alignment, flags);
- if (IS_ERR(vma))
- return PTR_ERR(vma);
+ if (!bound) {
+ ret = i915_vma_insert(vma, size, alignment, flags);
+ if (ret)
+ goto err;
}

- bound = vma->bound;
- ret = i915_vma_bind(vma, obj->cache_level, flags);
+ ret = i915_vma_bind(vma, vma->obj->cache_level, flags);
if (ret)
- return ret;
+ goto err;

- if (ggtt_view && ggtt_view->type == I915_GGTT_VIEW_NORMAL &&
- (bound ^ vma->bound) & GLOBAL_BIND) {
+ if ((bound ^ vma->bound) & GLOBAL_BIND)
__i915_vma_set_map_and_fenceable(vma);
- WARN_ON(flags & PIN_MAPPABLE && !obj->map_and_fenceable);
- }

GEM_BUG_ON(i915_vma_misplaced(vma, size, alignment, flags));
-
- vma->pin_count++;
return 0;
-}

-int
-i915_gem_object_pin(struct drm_i915_gem_object *obj,
- struct i915_address_space *vm,
- uint64_t size,
- uint32_t alignment,
- uint64_t flags)
-{
- return i915_gem_object_do_pin(obj, vm,
- i915_is_ggtt(vm) ? &i915_ggtt_view_normal : NULL,
- size, alignment, flags);
+err:
+ vma->pin_count--;
+ return ret;
}

int
@@ -3575,11 +3524,35 @@ i915_gem_object_ggtt_pin(struct drm_i915_gem_object *obj,
uint32_t alignment,
uint64_t flags)
{
+ struct i915_vma *vma;
+ int ret;
+
if (WARN_ONCE(!view, "no view specified"))
return -EINVAL;

- return i915_gem_object_do_pin(obj, i915_obj_to_ggtt(obj), view,
- size, alignment, flags | PIN_GLOBAL);
+ vma = i915_gem_obj_lookup_or_create_ggtt_vma(obj, view);
+ if (IS_ERR(vma))
+ return PTR_ERR(vma);
+
+ if (i915_vma_misplaced(vma, size, alignment, flags)) {
+ if (flags & PIN_NONBLOCK && (vma->pin_count | vma->active))
+ return -ENOSPC;
+
+ WARN(vma->pin_count,
+ "bo is already pinned in ggtt with incorrect alignment:"
+ " offset=%08x %08x, req.alignment=%x, req.map_and_fenceable=%d,"
+ " obj->map_and_fenceable=%d\n",
+ upper_32_bits(vma->node.start),
+ lower_32_bits(vma->node.start),
+ alignment,
+ !!(flags & PIN_MAPPABLE),
+ obj->map_and_fenceable);
+ ret = i915_vma_unbind(vma);
+ if (ret)
+ return ret;
+ }
+
+ return i915_vma_pin(vma, size, alignment, flags | PIN_GLOBAL);
}

void
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index 899220139a8a..d4dcc3e5d080 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -44,11 +44,10 @@
struct i915_execbuffer_params {
struct drm_device *dev;
struct drm_file *file;
+ struct i915_vma *batch_vma;
uint32_t dispatch_flags;
uint32_t args_batch_start_offset;
- uint64_t batch_obj_vm_offset;
struct intel_engine_cs *ring;
- struct drm_i915_gem_object *batch_obj;
struct intel_context *ctx;
struct drm_i915_gem_request *request;
};
@@ -101,6 +100,26 @@ eb_reset(struct eb_vmas *eb)
memset(eb->buckets, 0, (eb->and+1)*sizeof(struct hlist_head));
}

+static struct i915_vma *
+eb_get_batch(struct eb_vmas *eb)
+{
+ struct i915_vma *vma = list_entry(eb->vmas.prev, typeof(*vma), exec_list);
+
+ /*
+ * SNA is doing fancy tricks with compressing batch buffers, which leads
+ * to negative relocation deltas. Usually that works out ok since the
+ * relocate address is still positive, except when the batch is placed
+ * very low in the GTT. Ensure this doesn't happen.
+ *
+ * Note that actual hangs have only been observed on gen7, but for
+ * paranoia do it everywhere.
+ */
+ if ((vma->exec_entry->flags & EXEC_OBJECT_PINNED) == 0)
+ vma->exec_entry->flags |= __EXEC_OBJECT_NEEDS_BIAS;
+
+ return vma;
+}
+
static int
eb_lookup_vmas(struct eb_vmas *eb,
struct drm_i915_gem_exec_object2 *exec,
@@ -642,16 +661,16 @@ i915_gem_execbuffer_reserve_vma(struct i915_vma *vma,
flags |= PIN_HIGH;
}

- ret = i915_gem_object_pin(obj, vma->vm,
- entry->pad_to_size,
- entry->alignment,
- flags);
- if ((ret == -ENOSPC || ret == -E2BIG) &&
+ ret = i915_vma_pin(vma,
+ entry->pad_to_size,
+ entry->alignment,
+ flags);
+ if ((ret == -ENOSPC || ret == -E2BIG) &&
only_mappable_for_reloc(entry->flags))
- ret = i915_gem_object_pin(obj, vma->vm,
- entry->pad_to_size,
- entry->alignment,
- flags & ~PIN_MAPPABLE);
+ ret = i915_vma_pin(vma,
+ entry->pad_to_size,
+ entry->alignment,
+ flags & ~PIN_MAPPABLE);
if (ret)
return ret;

@@ -1203,11 +1222,11 @@ i915_reset_gen7_sol_offsets(struct drm_i915_gem_request *req)
return 0;
}

-static struct drm_i915_gem_object*
+static struct i915_vma*
i915_gem_execbuffer_parse(struct intel_engine_cs *ring,
struct drm_i915_gem_exec_object2 *shadow_exec_entry,
- struct eb_vmas *eb,
struct drm_i915_gem_object *batch_obj,
+ struct eb_vmas *eb,
u32 batch_start_offset,
u32 batch_len,
bool is_master)
@@ -1219,7 +1238,7 @@ i915_gem_execbuffer_parse(struct intel_engine_cs *ring,
shadow_batch_obj = i915_gem_batch_pool_get(&ring->batch_pool,
PAGE_ALIGN(batch_len));
if (IS_ERR(shadow_batch_obj))
- return shadow_batch_obj;
+ return ERR_CAST(shadow_batch_obj);

ret = i915_parse_cmds(ring,
batch_obj,
@@ -1244,14 +1263,12 @@ i915_gem_execbuffer_parse(struct intel_engine_cs *ring,
drm_gem_object_reference(&shadow_batch_obj->base);
list_add_tail(&vma->exec_list, &eb->vmas);

- shadow_batch_obj->base.pending_read_domains = I915_GEM_DOMAIN_COMMAND;
-
- return shadow_batch_obj;
+ return vma;

err:
i915_gem_object_unpin_pages(shadow_batch_obj);
if (ret == -EACCES) /* unhandled chained batch */
- return batch_obj;
+ return NULL;
else
return ERR_PTR(ret);
}
@@ -1331,7 +1348,7 @@ execbuf_submit(struct i915_execbuffer_params *params,
}

exec_len = args->batch_len;
- exec_start = params->batch_obj_vm_offset +
+ exec_start = params->batch_vma->node.start +
params->args_batch_start_offset;

ret = params->ring->emit_bb_start(params->request,
@@ -1378,26 +1395,6 @@ static int gen8_dispatch_bsd_ring(struct drm_device *dev,
}
}

-static struct drm_i915_gem_object *
-eb_get_batch(struct eb_vmas *eb)
-{
- struct i915_vma *vma = list_entry(eb->vmas.prev, typeof(*vma), exec_list);
-
- /*
- * SNA is doing fancy tricks with compressing batch buffers, which leads
- * to negative relocation deltas. Usually that works out ok since the
- * relocate address is still positive, except when the batch is placed
- * very low in the GTT. Ensure this doesn't happen.
- *
- * Note that actual hangs have only been observed on gen7, but for
- * paranoia do it everywhere.
- */
- if ((vma->exec_entry->flags & EXEC_OBJECT_PINNED) == 0)
- vma->exec_entry->flags |= __EXEC_OBJECT_NEEDS_BIAS;
-
- return vma->obj;
-}
-
static int
i915_gem_do_execbuffer(struct drm_device *dev, void *data,
struct drm_file *file,
@@ -1406,7 +1403,6 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
{
struct drm_i915_private *dev_priv = dev->dev_private;
struct eb_vmas *eb;
- struct drm_i915_gem_object *batch_obj;
struct drm_i915_gem_exec_object2 shadow_exec_entry;
struct intel_engine_cs *ring;
struct intel_context *ctx;
@@ -1542,7 +1538,7 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
goto err;

/* take note of the batch buffer before we might reorder the lists */
- batch_obj = eb_get_batch(eb);
+ params->batch_vma = eb_get_batch(eb);

/* Move the objects en-masse into the GTT, evicting if necessary. */
need_relocs = (args->flags & I915_EXEC_NO_RELOC) == 0;
@@ -1564,7 +1560,7 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
}

/* Set the pending read domains for the batch buffer to COMMAND */
- if (batch_obj->base.pending_write_domain) {
+ if (params->batch_vma->obj->base.pending_write_domain) {
DRM_DEBUG("Attempting to use self-modifying batch buffer\n");
ret = -EINVAL;
goto err;
@@ -1572,26 +1568,20 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,

params->args_batch_start_offset = args->batch_start_offset;
if (i915_needs_cmd_parser(ring) && args->batch_len) {
- struct drm_i915_gem_object *parsed_batch_obj;
-
- parsed_batch_obj = i915_gem_execbuffer_parse(ring,
- &shadow_exec_entry,
- eb,
- batch_obj,
- args->batch_start_offset,
- args->batch_len,
- file->is_master);
- if (IS_ERR(parsed_batch_obj)) {
- ret = PTR_ERR(parsed_batch_obj);
+ struct i915_vma *vma;
+
+ vma = i915_gem_execbuffer_parse(ring, &shadow_exec_entry,
+ params->batch_vma->obj,
+ eb,
+ args->batch_start_offset,
+ args->batch_len,
+ file->is_master);
+ if (IS_ERR(vma)) {
+ ret = PTR_ERR(vma);
goto err;
}

- /*
- * parsed_batch_obj == batch_obj means batch not fully parsed:
- * Accept, but don't promote to secure.
- */
-
- if (parsed_batch_obj != batch_obj) {
+ if (vma) {
/*
* Batch parsed and accepted:
*
@@ -1603,16 +1593,18 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
*/
dispatch_flags |= I915_DISPATCH_SECURE;
params->args_batch_start_offset = 0;
- batch_obj = parsed_batch_obj;
+ params->batch_vma = vma;
}
}

- batch_obj->base.pending_read_domains |= I915_GEM_DOMAIN_COMMAND;
+ params->batch_vma->obj->base.pending_read_domains |= I915_GEM_DOMAIN_COMMAND;

/* snb/ivb/vlv conflate the "batch in ppgtt" bit with the "non-secure
* batch" bit. Hence we need to pin secure batches into the global gtt.
* hsw should have this fixed, but bdw mucks it up again. */
if (dispatch_flags & I915_DISPATCH_SECURE) {
+ struct drm_i915_gem_object *obj = params->batch_vma->obj;
+
/*
* So on first glance it looks freaky that we pin the batch here
* outside of the reservation loop. But:
@@ -1623,13 +1615,12 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
* fitting due to fragmentation.
* So this is actually safe.
*/
- ret = i915_gem_obj_ggtt_pin(batch_obj, 0, 0);
+ ret = i915_gem_object_ggtt_pin(obj, NULL, 0, 0, 0);
if (ret)
goto err;

- params->batch_obj_vm_offset = i915_gem_obj_ggtt_offset(batch_obj);
- } else
- params->batch_obj_vm_offset = i915_gem_obj_offset(batch_obj, vm);
+ params->batch_vma = i915_gem_obj_to_ggtt(obj);
+ }

/* Allocate a request for this batch buffer nice and early. */
params->request = i915_gem_request_alloc(ring, ctx);
@@ -1654,11 +1645,10 @@ i915_gem_do_execbuffer(struct drm_device *dev, void *data,
params->file = file;
params->ring = ring;
params->dispatch_flags = dispatch_flags;
- params->batch_obj = batch_obj;
params->ctx = ctx;

ret = execbuf_submit(params, args, &eb->vmas);
- __i915_add_request(params->request, params->batch_obj, ret == 0);
+ __i915_add_request(params->request, params->batch_vma->obj, ret == 0);

err_batch_unpin:
/*
@@ -1668,8 +1658,7 @@ err_batch_unpin:
* active.
*/
if (dispatch_flags & I915_DISPATCH_SECURE)
- i915_gem_object_ggtt_unpin(batch_obj);
-
+ i915_vma_unpin(params->batch_vma);
err:
/* the request owns the ref now */
i915_gem_context_unreference(ctx);
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 98b9730f4066..8f3b2f051918 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -3549,13 +3549,10 @@ int i915_vma_bind(struct i915_vma *vma, enum i915_cache_level cache_level,
return 0;

if (vma->bound == 0 && vma->vm->allocate_va_range) {
- /* XXX: i915_vma_pin() will fix this +- hack */
- vma->pin_count++;
trace_i915_va_alloc(vma);
ret = vma->vm->allocate_va_range(vma->vm,
vma->node.start,
vma->node.size);
- vma->pin_count--;
if (ret)
return ret;
}
--
2.7.0.rc3
Chris Wilson
2016-01-11 10:44:50 UTC
Permalink
Split the insertion into the address space's range manager from the
binding of that object into the GTT, to simplify the code flow when
pinning a VMA.
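
A minimal sketch of the resulting flow, condensed from the diffs in this
series (helper names as they appear in the patches; lookup and error
paths are trimmed) - reserving address space and writing the PTEs become
two distinct steps:

static int
pin_sketch(struct drm_i915_gem_object *obj,
	   struct i915_address_space *vm,
	   const struct i915_ggtt_view *view,
	   u64 size, u32 alignment, u64 flags)
{
	struct i915_vma *vma = i915_gem_obj_to_vma(obj, vm);
	unsigned bound;
	int ret;

	if (vma == NULL || !drm_mm_node_allocated(&vma->node)) {
		/* 1: reserve a slot in the vm's range manager */
		vma = i915_gem_object_insert_into_vm(obj, vm, view,
						     size, alignment, flags);
		if (IS_ERR(vma))
			return PTR_ERR(vma);
	}

	/* 2: bind, i.e. write the PTEs for the reserved range */
	bound = vma->bound;
	ret = i915_vma_bind(vma, obj->cache_level, flags);
	if (ret)
		return ret;

	if ((bound ^ vma->bound) & GLOBAL_BIND)
		__i915_vma_set_map_and_fenceable(vma);

	vma->pin_count++;
	return 0;
}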

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_gem.c | 33 +++++++++++++++------------------
1 file changed, 15 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index 2f14d2da75a5..9c159e64a9a0 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2743,12 +2743,12 @@ static bool i915_gem_valid_gtt_space(struct i915_vma *vma,
* there.
*/
static struct i915_vma *
-i915_gem_object_bind_to_vm(struct drm_i915_gem_object *obj,
- struct i915_address_space *vm,
- const struct i915_ggtt_view *ggtt_view,
- uint64_t size,
- unsigned alignment,
- uint64_t flags)
+i915_gem_object_insert_into_vm(struct drm_i915_gem_object *obj,
+ struct i915_address_space *vm,
+ const struct i915_ggtt_view *ggtt_view,
+ uint64_t size,
+ unsigned alignment,
+ uint64_t flags)
{
struct drm_device *dev = obj->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
@@ -2877,11 +2877,6 @@ search_free:
goto err_remove_node;
}

- trace_i915_vma_bind(vma, flags);
- ret = i915_vma_bind(vma, obj->cache_level, flags);
- if (ret)
- goto err_remove_node;
-
list_move_tail(&obj->global_list, &dev_priv->mm.bound_list);
list_move_tail(&vma->vm_link, &vm->inactive_list);
obj->bind_count++;
@@ -3554,24 +3549,26 @@ i915_gem_object_do_pin(struct drm_i915_gem_object *obj,
}
}

- bound = vma ? vma->bound : 0;
if (vma == NULL || !drm_mm_node_allocated(&vma->node)) {
- vma = i915_gem_object_bind_to_vm(obj, vm, ggtt_view,
- size, alignment, flags);
+ vma = i915_gem_object_insert_into_vm(obj, vm, ggtt_view,
+ size, alignment, flags);
if (IS_ERR(vma))
return PTR_ERR(vma);
- } else {
- ret = i915_vma_bind(vma, obj->cache_level, flags);
- if (ret)
- return ret;
}

+ bound = vma->bound;
+ ret = i915_vma_bind(vma, obj->cache_level, flags);
+ if (ret)
+ return ret;
+
if (ggtt_view && ggtt_view->type == I915_GGTT_VIEW_NORMAL &&
(bound ^ vma->bound) & GLOBAL_BIND) {
__i915_vma_set_map_and_fenceable(vma);
WARN_ON(flags & PIN_MAPPABLE && !obj->map_and_fenceable);
}

+ GEM_BUG_ON(i915_vma_misplaced(vma, size, alignment, flags));
+
vma->pin_count++;
return 0;
}
--
2.7.0.rc3
Chris Wilson
2016-01-11 10:44:49 UTC
Permalink
Our GPUs impose certain requirements upon buffers that depend upon how
exactly they are used. Typically this is expressed as a requirement for
a larger surface than would be naively computed from pitch * height.
Normally such requirements are hidden away in the userspace driver, but
when we accept pointers from strangers and later impose extra conditions
on them, the original client allocator has no idea about the
monstrosities in the GPU and we require the userspace driver to inform
the kernel how many padding pages are required beyond the client
allocation.
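
A hedged userspace sketch of the new flag (the handle, sizes and the
surrounding execbuffer setup are assumed); pad_to_size must be
page-aligned or the kernel rejects the object list with -EINVAL:

	struct drm_i915_gem_exec_object2 entry;

	memset(&entry, 0, sizeof(entry));
	entry.handle = bo_handle;
	entry.flags = EXEC_OBJECT_PAD_TO_SIZE;
	/* reserve two extra pages of GTT space beyond the object */
	entry.pad_to_size = bo_size + 2 * 4096;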

v2: Long time, no see
v3: Try an anonymous union for uapi struct compatibility

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <***@intel.com>
Reviewed-by: Tvrtko Ursulin <***@intel.com>
---
drivers/gpu/drm/i915/i915_drv.h | 6 ++-
drivers/gpu/drm/i915/i915_gem.c | 79 +++++++++++++++---------------
drivers/gpu/drm/i915/i915_gem_execbuffer.c | 16 +++++-
include/uapi/drm/i915_drm.h | 8 ++-
4 files changed, 64 insertions(+), 45 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 4ada625b751e..49b126e4191e 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -2694,11 +2694,13 @@ void i915_gem_free_object(struct drm_gem_object *obj);
int __must_check
i915_gem_object_pin(struct drm_i915_gem_object *obj,
struct i915_address_space *vm,
+ uint64_t size,
uint32_t alignment,
uint64_t flags);
int __must_check
i915_gem_object_ggtt_pin(struct drm_i915_gem_object *obj,
const struct i915_ggtt_view *view,
+ uint64_t size,
uint32_t alignment,
uint64_t flags);

@@ -2931,8 +2933,8 @@ i915_gem_obj_ggtt_pin(struct drm_i915_gem_object *obj,
uint32_t alignment,
unsigned flags)
{
- return i915_gem_object_pin(obj, i915_obj_to_ggtt(obj),
- alignment, flags | PIN_GLOBAL);
+ return i915_gem_object_pin(obj, i915_obj_to_ggtt(obj), 0, alignment,
+ flags | PIN_GLOBAL);
}

static inline int
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index a82a06a61262..2f14d2da75a5 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -1440,7 +1440,7 @@ int i915_gem_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
}

/* Now pin it into the GTT if needed */
- ret = i915_gem_object_ggtt_pin(obj, &view, 0, PIN_MAPPABLE);
+ ret = i915_gem_object_ggtt_pin(obj, &view, 0, 0, PIN_MAPPABLE);
if (ret)
goto unlock;

@@ -2746,20 +2746,20 @@ static struct i915_vma *
i915_gem_object_bind_to_vm(struct drm_i915_gem_object *obj,
struct i915_address_space *vm,
const struct i915_ggtt_view *ggtt_view,
+ uint64_t size,
unsigned alignment,
uint64_t flags)
{
struct drm_device *dev = obj->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
- u32 fence_alignment, unfenced_alignment;
- u32 search_flag, alloc_flag;
u64 start, end;
- u64 size, fence_size;
+ u32 search_flag, alloc_flag;
struct i915_vma *vma;
int ret;

if (i915_is_ggtt(vm)) {
- u32 view_size;
+ u32 fence_size, fence_alignment, unfenced_alignment;
+ u64 view_size;

if (WARN_ON(!ggtt_view))
return ERR_PTR(-EINVAL);
@@ -2777,21 +2777,22 @@ i915_gem_object_bind_to_vm(struct drm_i915_gem_object *obj,
view_size,
obj->tiling_mode,
false);
- size = flags & PIN_MAPPABLE ? fence_size : view_size;
+ size = max(size, view_size);
+ if (flags & PIN_MAPPABLE)
+ size = max_t(u64, size, fence_size);
+
+ if (alignment == 0)
+ alignment = flags & PIN_MAPPABLE ? fence_alignment :
+ unfenced_alignment;
+ if (flags & PIN_MAPPABLE && alignment & (fence_alignment - 1)) {
+ DRM_DEBUG("Invalid object (view type=%u) alignment requested %u\n",
+ ggtt_view ? ggtt_view->type : 0,
+ alignment);
+ return ERR_PTR(-EINVAL);
+ }
} else {
- fence_size = i915_gem_get_gtt_size(dev,
- obj->base.size,
- obj->tiling_mode);
- fence_alignment = i915_gem_get_gtt_alignment(dev,
- obj->base.size,
- obj->tiling_mode,
- true);
- unfenced_alignment =
- i915_gem_get_gtt_alignment(dev,
- obj->base.size,
- obj->tiling_mode,
- false);
- size = flags & PIN_MAPPABLE ? fence_size : obj->base.size;
+ size = max_t(u64, size, obj->base.size);
+ alignment = 4096;
}

start = flags & PIN_OFFSET_BIAS ? flags & PIN_OFFSET_MASK : 0;
@@ -2801,24 +2802,14 @@ i915_gem_object_bind_to_vm(struct drm_i915_gem_object *obj,
if (flags & PIN_ZONE_4G)
end = min_t(u64, end, (1ULL << 32));

- if (alignment == 0)
- alignment = flags & PIN_MAPPABLE ? fence_alignment :
- unfenced_alignment;
- if (flags & PIN_MAPPABLE && alignment & (fence_alignment - 1)) {
- DRM_DEBUG("Invalid object (view type=%u) alignment requested %u\n",
- ggtt_view ? ggtt_view->type : 0,
- alignment);
- return ERR_PTR(-EINVAL);
- }
-
/* If binding the object/GGTT view requires more space than the entire
* aperture has, reject it early before evicting everything in a vain
* attempt to find space.
*/
if (size > end) {
- DRM_DEBUG("Attempting to bind an object (view type=%u) larger than the aperture: size=%llu > %s aperture=%llu\n",
+ DRM_DEBUG("Attempting to bind an object (view type=%u) larger than the aperture: request=%llu [object=%zd] > %s aperture=%llu\n",
ggtt_view ? ggtt_view->type : 0,
- size,
+ size, obj->base.size,
flags & PIN_MAPPABLE ? "mappable" : "total",
end);
return ERR_PTR(-E2BIG);
@@ -3309,7 +3300,7 @@ i915_gem_object_pin_to_display_plane(struct drm_i915_gem_object *obj,
* (e.g. libkms for the bootup splash), we have to ensure that we
* always use map_and_fenceable for all scanout buffers.
*/
- ret = i915_gem_object_ggtt_pin(obj, view, alignment,
+ ret = i915_gem_object_ggtt_pin(obj, view, 0, alignment,
view->type == I915_GGTT_VIEW_NORMAL ?
PIN_MAPPABLE : 0);
if (ret)
@@ -3459,12 +3450,17 @@ i915_gem_ring_throttle(struct drm_device *dev, struct drm_file *file)
}

static bool
-i915_vma_misplaced(struct i915_vma *vma, uint32_t alignment, uint64_t flags)
+i915_vma_misplaced(struct i915_vma *vma,
+ uint64_t size,
+ uint32_t alignment,
+ uint64_t flags)
{
struct drm_i915_gem_object *obj = vma->obj;

- if (alignment &&
- vma->node.start & (alignment - 1))
+ if (vma->node.size < size)
+ return true;
+
+ if (alignment && vma->node.start & (alignment - 1))
return true;

if (flags & PIN_MAPPABLE && !obj->map_and_fenceable)
@@ -3508,6 +3504,7 @@ static int
i915_gem_object_do_pin(struct drm_i915_gem_object *obj,
struct i915_address_space *vm,
const struct i915_ggtt_view *ggtt_view,
+ uint64_t size,
uint32_t alignment,
uint64_t flags)
{
@@ -3538,7 +3535,7 @@ i915_gem_object_do_pin(struct drm_i915_gem_object *obj,
if (WARN_ON(vma->pin_count == DRM_I915_GEM_OBJECT_MAX_PIN_COUNT))
return -EBUSY;

- if (i915_vma_misplaced(vma, alignment, flags)) {
+ if (i915_vma_misplaced(vma, size, alignment, flags)) {
WARN(vma->pin_count,
"bo is already pinned in %s with incorrect alignment:"
" offset=%08x %08x, req.alignment=%x, req.map_and_fenceable=%d,"
@@ -3559,8 +3556,8 @@ i915_gem_object_do_pin(struct drm_i915_gem_object *obj,

bound = vma ? vma->bound : 0;
if (vma == NULL || !drm_mm_node_allocated(&vma->node)) {
- vma = i915_gem_object_bind_to_vm(obj, vm, ggtt_view, alignment,
- flags);
+ vma = i915_gem_object_bind_to_vm(obj, vm, ggtt_view,
+ size, alignment, flags);
if (IS_ERR(vma))
return PTR_ERR(vma);
} else {
@@ -3582,17 +3579,19 @@ i915_gem_object_do_pin(struct drm_i915_gem_object *obj,
int
i915_gem_object_pin(struct drm_i915_gem_object *obj,
struct i915_address_space *vm,
+ uint64_t size,
uint32_t alignment,
uint64_t flags)
{
return i915_gem_object_do_pin(obj, vm,
i915_is_ggtt(vm) ? &i915_ggtt_view_normal : NULL,
- alignment, flags);
+ size, alignment, flags);
}

int
i915_gem_object_ggtt_pin(struct drm_i915_gem_object *obj,
const struct i915_ggtt_view *view,
+ uint64_t size,
uint32_t alignment,
uint64_t flags)
{
@@ -3600,7 +3599,7 @@ i915_gem_object_ggtt_pin(struct drm_i915_gem_object *obj,
return -EINVAL;

return i915_gem_object_do_pin(obj, i915_obj_to_ggtt(obj), view,
- alignment, flags | PIN_GLOBAL);
+ size, alignment, flags | PIN_GLOBAL);
}

void
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index d88be1d3cb86..899220139a8a 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -642,10 +642,14 @@ i915_gem_execbuffer_reserve_vma(struct i915_vma *vma,
flags |= PIN_HIGH;
}

- ret = i915_gem_object_pin(obj, vma->vm, entry->alignment, flags);
+ ret = i915_gem_object_pin(obj, vma->vm,
+ entry->pad_to_size,
+ entry->alignment,
+ flags);
if ((ret == -ENOSPC || ret == -E2BIG) &&
only_mappable_for_reloc(entry->flags))
ret = i915_gem_object_pin(obj, vma->vm,
+ entry->pad_to_size,
entry->alignment,
flags & ~PIN_MAPPABLE);
if (ret)
@@ -708,6 +712,9 @@ eb_vma_misplaced(struct i915_vma *vma)
vma->node.start & (entry->alignment - 1))
return true;

+ if (vma->node.size < entry->pad_to_size)
+ return true;
+
if (entry->flags & EXEC_OBJECT_PINNED &&
vma->node.start != entry->offset)
return true;
@@ -1044,6 +1051,13 @@ validate_exec_list(struct drm_device *dev,
if (exec[i].alignment && !is_power_of_2(exec[i].alignment))
return -EINVAL;

+ /* pad_to_size was once a reserved field, so sanitize it */
+ if (exec[i].flags & EXEC_OBJECT_PAD_TO_SIZE) {
+ if (offset_in_page(exec[i].pad_to_size))
+ return -EINVAL;
+ } else
+ exec[i].pad_to_size = 0;
+
/* First check for malicious input causing overflow in
* the worst case where we need to allocate the entire
* relocation tree as a single array.
diff --git a/include/uapi/drm/i915_drm.h b/include/uapi/drm/i915_drm.h
index 7fee4416dcc7..ff7b438059da 100644
--- a/include/uapi/drm/i915_drm.h
+++ b/include/uapi/drm/i915_drm.h
@@ -697,10 +697,14 @@ struct drm_i915_gem_exec_object2 {
#define EXEC_OBJECT_WRITE (1<<2)
#define EXEC_OBJECT_SUPPORTS_48B_ADDRESS (1<<3)
#define EXEC_OBJECT_PINNED (1<<4)
-#define __EXEC_OBJECT_UNKNOWN_FLAGS -(EXEC_OBJECT_PINNED<<1)
+#define EXEC_OBJECT_PAD_TO_SIZE (1<<5)
+#define __EXEC_OBJECT_UNKNOWN_FLAGS -(EXEC_OBJECT_PAD_TO_SIZE<<1)
__u64 flags;

- __u64 rsvd1;
+ union {
+ __u64 rsvd1;
+ __u64 pad_to_size;
+ };
__u64 rsvd2;
};
--
2.7.0.rc3
Chris Wilson
2016-01-11 10:44:41 UTC
Permalink
As we inspect obj->active to decide how many objects we can shrink (we
only shrink idle objects), it helps to flush the active lists first
in order to have a more accurate count of available objects.
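
The shape of the counting path after this change, condensed from the
diff below (the per-object accounting line is an assumption, elided by
the hunk):

	i915_gem_retire_requests(dev);	/* flush the active lists */

	count = 0;
	list_for_each_entry(obj, &dev_priv->mm.unbound_list, global_list)
		if (obj->pages_pin_count == 0)
			count += obj->base.size >> PAGE_SHIFT;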

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_gem_shrinker.c | 2 ++
1 file changed, 2 insertions(+)

diff --git a/drivers/gpu/drm/i915/i915_gem_shrinker.c b/drivers/gpu/drm/i915/i915_gem_shrinker.c
index e15fc7531f08..67f3eb9a8391 100644
--- a/drivers/gpu/drm/i915/i915_gem_shrinker.c
+++ b/drivers/gpu/drm/i915/i915_gem_shrinker.c
@@ -225,6 +225,8 @@ i915_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
if (!i915_gem_shrinker_lock(dev, &unlock))
return 0;

+ i915_gem_retire_requests(dev);
+
count = 0;
list_for_each_entry(obj, &dev_priv->mm.unbound_list, global_list)
if (obj->pages_pin_count == 0)
--
2.7.0.rc3
Chris Wilson
2016-01-11 10:44:58 UTC
Permalink
With a bit of care (and leniency) we can iterate over the object and
wait for previous rendering to complete with judicious use of atomic
reference counting. The ABI requires us to ensure that an active object
is eventually flushed (as the busy-ioctl does), which is guaranteed by our
management of requests (i.e. everything that is submitted to hardware is
flushed in the same request). All we have to do is ensure that we can
detect when the requests are complete for reporting when the object is
idle (without triggering ETIME) - this is handled by
__i915_wait_request.

The biggest danger in the code is walking the object without holding any
locks. We iterate over the set of last requests and carefully grab a
reference upon it. (If it is changing beneath us, that is the usual
userspace race and even with locking you get the same indeterminate
results.) If the request is unreferenced beneath us, it will be disposed
of into the request cache - so we have to carefully order the retrieval
of the request pointer against its removal, and to do this we employ RCU
on both the request cache and the last_request pointer tracking.
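
A hedged sketch of that lookup (the field names active->request and
req->ref are assumptions for illustration; the caller holds
rcu_read_lock(), as the wait-ioctl below does):

static struct drm_i915_gem_request *
active_get_request_rcu(struct i915_gem_active *active)
{
	struct drm_i915_gem_request *req;

	do {
		req = rcu_dereference(active->request);
		if (req == NULL)
			return NULL;

		if (!kref_get_unless_zero(&req->ref))
			continue;	/* being freed, reload and retry */

		if (req == rcu_access_pointer(active->request))
			return req;	/* reference and slot still agree */

		/* the slot was retired/reused beneath us; drop and retry */
		i915_gem_request_put(req);
	} while (1);
}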

The impact of this is actually quite small - the return to userspace
following the wait was already lockless. What we achieve here is
completing an already finished wait without hitting the struct_mutex;
our hold is quite short and so we are typically just a victim of
contention rather than a cause.

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_gem.c | 52 +++++++++++++++--------------------------
1 file changed, 19 insertions(+), 33 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index ee715558ecea..f30207596ec6 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -2440,54 +2440,40 @@ i915_gem_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file)
{
struct drm_i915_gem_wait *args = data;
struct drm_i915_gem_object *obj;
- struct drm_i915_gem_request *req[I915_NUM_RINGS];
- int i, n = 0;
- int ret;
+ int i, ret = 0;

if (args->flags != 0)
return -EINVAL;

- ret = i915_mutex_lock_interruptible(dev);
- if (ret)
- return ret;
-
obj = to_intel_bo(drm_gem_object_lookup(dev, file, args->bo_handle));
- if (&obj->base == NULL) {
- mutex_unlock(&dev->struct_mutex);
+ if (&obj->base == NULL)
return -ENOENT;
- }

- /* Need to make sure the object gets inactive eventually. */
- i915_gem_object_flush_active(obj);
- if (!i915_gem_object_is_active(obj))
+ if (!__I915_BO_ACTIVE(obj))
goto out;

- /* Do this after OLR check to make sure we make forward progress polling
- * on this IOCTL with a timeout == 0 (like busy ioctl)
- */
- if (args->timeout_ns == 0) {
- ret = -ETIME;
- goto out;
- }
-
+ rcu_read_lock();
for (i = 0; i < I915_NUM_RINGS; i++) {
- if (obj->last_read[i].request == NULL)
+ struct drm_i915_gem_request *req;
+
+ req = i915_gem_active_get_request_rcu(&obj->last_read[i]);
+ if (req == NULL)
continue;

- req[n++] = i915_gem_request_get(obj->last_read[i].request);
+ rcu_read_unlock();
+ ret = __i915_wait_request(req, true,
+ args->timeout_ns >= 0 ? &args->timeout_ns : NULL,
+ to_rps_client(file));
+ i915_gem_request_put(req);
+ if (ret)
+ goto out;
+
+ rcu_read_lock();
}
+ rcu_read_unlock();

out:
- drm_gem_object_unreference(&obj->base);
- mutex_unlock(&dev->struct_mutex);
-
- for (i = 0; i < n; i++) {
- if (ret == 0)
- ret = __i915_wait_request(req[i], true,
- args->timeout_ns > 0 ? &args->timeout_ns : NULL,
- to_rps_client(file));
- i915_gem_request_put(req[i]);
- }
+ drm_gem_object_unreference_unlocked(&obj->base);
return ret;
}
--
2.7.0.rc3
Chris Wilson
2016-01-11 10:44:45 UTC
Permalink
We only want to retire requests if we have an existing object that
conflicts with the fresh userptr range in order to avoid unnecessary
work during creation of every userptr.
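
Condensed from the diff below, the resulting check-retire-recheck flow
only pays for retiring when a conflicting range is actually found:

	spin_lock(&mn->lock);
	it = interval_tree_iter_first(&mn->objects, mo->it.start, mo->it.last);
	if (it) {
		spin_unlock(&mn->lock);

		/* drop final active references so stale objects
		 * leave the interval tree before we recheck
		 */
		i915_gem_retire_requests(dev);

		spin_lock(&mn->lock);
		it = interval_tree_iter_first(&mn->objects,
					      mo->it.start, mo->it.last);
	}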

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_gem_userptr.c | 20 +++++++++++++-------
1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_gem_userptr.c b/drivers/gpu/drm/i915/i915_gem_userptr.c
index a90392246471..2f922392bd10 100644
--- a/drivers/gpu/drm/i915/i915_gem_userptr.c
+++ b/drivers/gpu/drm/i915/i915_gem_userptr.c
@@ -187,17 +187,23 @@ i915_mmu_notifier_add(struct drm_device *dev,
* using an interrupt timer is likely to get stuck in an EINTR loop).
*/
mutex_lock(&dev->struct_mutex);
-
- /* Make sure we drop the final active reference (and thereby
- * remove the objects from the interval tree) before we do
- * the check for overlapping objects.
- */
- i915_gem_retire_requests(dev);
-
spin_lock(&mn->lock);
it = interval_tree_iter_first(&mn->objects,
mo->it.start, mo->it.last);
if (it) {
+ spin_unlock(&mn->lock);
+
+ /* Make sure we drop the final active reference (and thereby
+ * remove the objects from the interval tree) before we do
+ * the check for overlapping objects.
+ */
+ i915_gem_retire_requests(dev);
+
+ spin_lock(&mn->lock);
+ it = interval_tree_iter_first(&mn->objects,
+ mo->it.start, mo->it.last);
+ }
+ if (it) {
struct drm_i915_gem_object *obj;

/* We only need to check the first object in the range as it
--
2.7.0.rc3
Chris Wilson
2016-01-11 10:44:34 UTC
Permalink
Refactor pinning and unpinning of contexts, such that the default
context for an engine is pinned during initialisation and unpinned
during teardown (pinning of the context handles the reference counting).
Thus we can eliminate the special-case handling of the default context
that was required to mask that it was not pinned through the normal path.
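
A condensed sketch of the refactored pair (names as in the diff below;
the error path resetting pin_count is trimmed) - the first pin takes the
context reference and the last unpin releases it, so the default context
needs no special casing:

static int lr_context_pin_sketch(struct intel_context *ctx,
				 struct intel_engine_cs *engine)
{
	if (ctx->engine[engine->id].pin_count++)
		return 0;		/* already pinned and referenced */

	/* ... pin the state object and map the ring ... */
	i915_gem_context_reference(ctx);
	return 0;
}

static void lr_context_unpin_sketch(struct intel_context *ctx,
				    struct intel_engine_cs *engine)
{
	if (--ctx->engine[engine->id].pin_count)
		return;			/* still pinned elsewhere */

	/* ... unmap the ring and unpin the state object ... */
	i915_gem_context_unreference(ctx);
}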

Signed-off-by: Chris Wilson <***@chris-wilson.co.uk>
---
drivers/gpu/drm/i915/i915_debugfs.c | 7 +-
drivers/gpu/drm/i915/i915_gem_request.c | 6 +-
drivers/gpu/drm/i915/intel_lrc.c | 117 +++++++++++++-------------------
drivers/gpu/drm/i915/intel_lrc.h | 3 +-
4 files changed, 53 insertions(+), 80 deletions(-)

diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index a5ea90944bbb..ea5b9f6d0fc9 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -2052,11 +2052,8 @@ static int i915_dump_lrc(struct seq_file *m, void *unused)
return ret;

list_for_each_entry(ctx, &dev_priv->context_list, link) {
- for_each_ring(ring, dev_priv, i) {
- if (ring->default_context != ctx)
- i915_dump_lrc_obj(m, ring,
- ctx->engine[i].state);
- }
+ for_each_ring(ring, dev_priv, i)
+ i915_dump_lrc_obj(m, ring, ctx->engine[i].state);
}

mutex_unlock(&dev->struct_mutex);
diff --git a/drivers/gpu/drm/i915/i915_gem_request.c b/drivers/gpu/drm/i915/i915_gem_request.c
index 069c0b9dfd95..61be8dda4a14 100644
--- a/drivers/gpu/drm/i915/i915_gem_request.c
+++ b/drivers/gpu/drm/i915/i915_gem_request.c
@@ -345,10 +345,8 @@ static void __i915_gem_request_retire_active(struct drm_i915_gem_request *req)
void i915_gem_request_cancel(struct drm_i915_gem_request *req)
{
intel_ring_reserved_space_cancel(req->ring);
- if (i915.enable_execlists) {
- if (req->ctx != req->engine->default_context)
- intel_lr_context_unpin(req);
- }
+ if (i915.enable_execlists)
+ intel_lr_context_unpin(req->ctx, req->engine);

/* If a request is to be discarded after actions have been queued upon
* it, we cannot unwind that request and it must be submitted rather
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 80b346a3fd8a..31fbb482d15c 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -227,7 +227,8 @@ enum {
#define GEN8_CTX_ID_SHIFT 32
#define CTX_RCS_INDIRECT_CTX_OFFSET_DEFAULT 0x17

-static int intel_lr_context_pin(struct drm_i915_gem_request *rq);
+static int intel_lr_context_pin(struct intel_context *ctx,
+ struct intel_engine_cs *engine);
static void lrc_setup_hardware_status_page(struct intel_engine_cs *ring,
struct drm_i915_gem_object *default_ctx_obj);

@@ -485,11 +486,9 @@ int intel_logical_ring_alloc_request_extras(struct drm_i915_gem_request *request

request->ring = request->ctx->engine[request->engine->id].ring;

- if (request->ctx != request->engine->default_context) {
- ret = intel_lr_context_pin(request);
- if (ret)
- return ret;
- }
+ ret = intel_lr_context_pin(request->ctx, request->engine);
+ if (ret)
+ return ret;

if (i915.enable_guc_submission) {
/*
@@ -521,13 +520,7 @@ bool intel_execlists_retire_requests(struct intel_engine_cs *ring)
spin_unlock(&ring->execlist_lock);

list_for_each_entry_safe(req, tmp, &list, execlist_link) {
- struct intel_context *ctx = req->ctx;
- struct drm_i915_gem_object *ctx_obj =
- ctx->engine[ring->id].state;
-
- if (ctx_obj && (ctx != ring->default_context))
- intel_lr_context_unpin(req);
-
+ intel_lr_context_unpin(req->ctx, req->engine);
i915_gem_request_put(req);
}

@@ -557,83 +550,73 @@ void intel_logical_ring_stop(struct intel_engine_cs *ring)
I915_WRITE_MODE(ring, _MASKED_BIT_DISABLE(STOP_RING));
}

-static int intel_lr_context_do_pin(struct intel_engine_cs *ring,
- struct drm_i915_gem_object *ctx_obj,
- struct intel_ring *ringbuf)
+static int intel_lr_context_pin(struct intel_context *ctx,
+ struct intel_engine_cs *engine)
{
- struct drm_i915_private *dev_priv = ring->i915;
+ struct drm_i915_private *dev_priv = engine->i915;
+ struct drm_i915_gem_object *ctx_obj;
+ struct intel_ring *ring;
u32 ggtt_offset;
int ret = 0;

- WARN_ON(!mutex_is_locked(&ring->dev->struct_mutex));
+ if (ctx->engine[engine->id].pin_count++)
+ return 0;
+
+ lockdep_assert_held(&engine->dev->struct_mutex);
+
+ ctx_obj = ctx->engine[engine->id].state;
ret = i915_gem_obj_ggtt_pin(ctx_obj, GEN8_LR_CONTEXT_ALIGN,
PIN_OFFSET_BIAS | GUC_WOPCM_TOP);
if (ret)
- return ret;
+ goto err;

- ret = intel_ring_map(ringbuf);
+ ring = ctx->engine[engine->id].ring;
+ ret = intel_ring_map(ring);
if (ret)
goto unpin_ctx_obj;

+ i915_gem_context_reference(ctx);
ctx_obj->dirty = true;

ggtt_offset =
i915_gem_obj_ggtt_offset(ctx_obj) + LRC_PPHWSP_PN * PAGE_SIZE;
- ringbuf->context_descriptor =
- ggtt_offset | ring->execlist_context_descriptor;
+ ring->context_descriptor =
+ ggtt_offset | engine->execlist_context_descriptor;

- ringbuf->registers =
+ ring->registers =
kmap(i915_gem_object_get_dirty_page(ctx_obj, LRC_STATE_PN));
- ringbuf->registers[CTX_RING_BUFFER_START+1] =
- i915_gem_obj_ggtt_offset(ringbuf->obj);
+ ring->registers[CTX_RING_BUFFER_START+1] =
+ i915_gem_obj_ggtt_offset(ring->obj);

/* Invalidate GuC TLB. */
if (i915.enable_guc_submission)
I915_WRITE(GEN8_GTCR, GEN8_GTCR_INVALIDATE);

- return ret;
+ return 0;

unpin_ctx_obj:
i915_gem_object_ggtt_unpin(ctx_obj);
-
+err:
+ ctx->engine[engine->id].pin_count = 0;
return ret;
}

-static int intel_lr_context_pin(struct drm_i915_gem_request *rq)
-{
- int engine = rq->engine->id;
- int ret;
-
- if (rq->ctx->engine[engine].pin_count++)
- return 0;
-
- ret = intel_lr_context_do_pin(rq->engine,
- rq->ctx->engine[engine].state,
- rq->ring);
- if (ret) {
- rq->ctx->engine[engine].pin_count = 0;
- return ret;
- }
-
- i915_gem_context_reference(rq->ctx);
- return 0;
-}
-
-void intel_lr_context_unpin(struct drm_i915_gem_request *rq)
+void intel_lr_context_unpin(struct intel_context *ctx,
+ struct intel_engine_cs *engine)
{
struct drm_i915_gem_object *ctx_obj;
- int engine = rq->engine->id;

- WARN_ON(!mutex_is_locked(&rq->i915->dev->struct_mutex));
- if (--rq->ctx->engine[engine].pin_count)
+ lockdep_assert_held(&engine->dev->struct_mutex);
+ if (--ctx->engine[engine->id].pin_count)
return;

- intel_ring_unmap(rq->ring);
+ intel_ring_unmap(ctx->engine[engine->id].ring);

- ctx_obj = rq->ctx->engine[engine].state;
+ ctx_obj = ctx->engine[engine->id].state;
kunmap(i915_gem_object_get_page(ctx_obj, LRC_STATE_PN));
i915_gem_object_ggtt_unpin(ctx_obj);
- i915_gem_context_unreference(rq->ctx);
+
+ i915_gem_context_unreference(ctx);
}

static int intel_logical_ring_workarounds_emit(struct drm_i915_gem_request *req)
@@ -1425,6 +1408,7 @@ void intel_logical_ring_cleanup(struct intel_engine_cs *ring)
kunmap(sg_page(ring->status_page.obj->pages->sgl));
ring->status_page.obj = NULL;
}
+ intel_lr_context_unpin(ring->default_context, ring);

lrc_destroy_wa_ctx_obj(ring);
ring->dev = NULL;
@@ -1433,6 +1417,7 @@ void intel_logical_ring_cleanup(struct intel_engine_cs *ring)
static int logical_ring_init(struct drm_device *dev, struct intel_engine_cs *ring)
{
struct drm_i915_private *dev_priv = to_i915(dev);
+ struct intel_context *ctx;
struct task_struct *task;
int ret;

@@ -1457,19 +1442,17 @@ static int logical_ring_init(struct drm_device *dev, struct intel_engine_cs *rin
if (ret)
goto error;

- ret = intel_lr_context_deferred_alloc(ring->default_context, ring);
+ ctx = ring->default_context;
+
+ ret = intel_lr_context_deferred_alloc(ctx, ring);
if (ret)
goto error;

/* As this is the default context, always pin it */
- ret = intel_lr_context_do_pin(
- ring,
- ring->default_context->engine[ring->id].state,
- ring->default_context->engine[ring->id].ring);
+ ret = intel_lr_context_pin(ctx, ring);
if (ret) {
- DRM_ERROR(
- "Failed to pin and map ringbuffer %s: %d\n",
- ring->name, ret);
+ DRM_ERROR("Failed to pin context for %s: %d\n",
+ ring->name, ret);
goto error;
}

@@ -1872,15 +1855,9 @@ void intel_lr_context_free(struct intel_context *ctx)
struct drm_i915_gem_object *ctx_obj = ctx->engine[i].state;

if (ctx_obj) {
- struct intel_ring *ring = ctx->engine[i].ring;
- struct intel_engine_cs *engine = ring->engine;
+ WARN_ON(ctx->engine[i].pin_count);

- if (ctx == engine->default_context) {
- intel_ring_unmap(ring);
- i915_gem_object_ggtt_unpin(ctx_obj);
- }
- WARN_ON(ctx->engine[engine->id].pin_count);
- intel_ring_free(ring);
+ intel_ring_free(ctx->engine[i].ring);
drm_gem_object_unreference(&ctx_obj->base);
}
}
diff --git a/drivers/gpu/drm/i915/intel_lrc.h b/drivers/gpu/drm/i915/intel_lrc.h
index 37601a35d5fc..a43d1e5e5f5a 100644
--- a/drivers/gpu/drm/i915/intel_lrc.h
+++ b/drivers/gpu/drm/i915/intel_lrc.h
@@ -71,7 +71,8 @@ void intel_lr_context_free(struct intel_context *ctx);
uint32_t intel_lr_context_size(struct intel_engine_cs *ring);
int intel_lr_context_deferred_alloc(struct intel_context *ctx,
struct intel_engine_cs *ring);
-void intel_lr_context_unpin(struct drm_i915_gem_request *req);
+void intel_lr_context_unpin(struct intel_context *ctx,
+ struct intel_engine_cs *engine);
void intel_lr_context_reset(struct drm_device *dev,
struct intel_context *ctx);
--
2.7.0.rc3